Test Report: KVM_Linux_crio 12230

098adff14f97e55ded5626b0a90c858c09622337:2021-08-13:19986

Failed tests (13/269)

TestAddons/parallel/Ingress (242.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-lrs4t" [fd4f7bc3-556e-4421-bdf7-5a4ddba42249] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 13.33887ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200811-30853 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210813200811-30853 replace --force -f testdata/nginx-pod-svc.yaml
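
The two replaced manifests come from the test's testdata and define the Ingress plus the nginx pod and service it routes to. Assuming the same context name as this run, the applied objects can be inspected with:

    # list the Ingress, Service, and Pod the test just (re)created
    kubectl --context addons-20210813200811-30853 get ingress,svc,pods -o wide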

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c] Pending
helpers_test.go:343: "nginx" [2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.049174707s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.027575157s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.824659823s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.194838471s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
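
curl exits with status 28 on an operation timeout, and ssh propagates the remote command's exit code, so each probe above spent its full ~32 s window without an answer from the ingress (the exact cutoff presumably comes from the harness rather than curl defaults). A minimal sketch of re-running the failing probe by hand, with an explicit timeout standing in for that window:

    # reproduce the probe; --max-time caps curl at roughly the observed window
    out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh \
      "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
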
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200811-30853 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.856449465s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.119912925s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.794994925s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
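
The second round of probes (addons_test.go:242, after re-applying the Ingress at :165) timed out the same way, which points at the controller rather than the manifests. A hedged next diagnostic step would be to pull the controller's state and recent logs, reusing the label the earlier wait matched on:

    # controller pod state and recent logs in the ingress-nginx namespace
    kubectl --context addons-20210813200811-30853 -n ingress-nginx get pods -o wide
    kubectl --context addons-20210813200811-30853 -n ingress-nginx \
      logs -l app.kubernetes.io/name=ingress-nginx --tail=50
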
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable ingress --alsologtostderr -v=1: (29.183511179s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210813200811-30853 -n addons-20210813200811-30853
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 logs -n 25: (1.450737374s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                |              Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                              | download-only-20210813200748-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:10 UTC | Fri, 13 Aug 2021 20:08:11 UTC |
	| delete  | -p                                 | download-only-20210813200748-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:11 UTC | Fri, 13 Aug 2021 20:08:11 UTC |
	|         | download-only-20210813200748-30853 |                                    |         |         |                               |                               |
	| delete  | -p                                 | download-only-20210813200748-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:11 UTC | Fri, 13 Aug 2021 20:08:11 UTC |
	|         | download-only-20210813200748-30853 |                                    |         |         |                               |                               |
	| start   | -p addons-20210813200811-30853     | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:11 UTC | Fri, 13 Aug 2021 20:11:24 UTC |
	|         | --wait=true --memory=4000          |                                    |         |         |                               |                               |
	|         | --alsologtostderr                  |                                    |         |         |                               |                               |
	|         | --addons=registry                  |                                    |         |         |                               |                               |
	|         | --addons=metrics-server            |                                    |         |         |                               |                               |
	|         | --addons=olm                       |                                    |         |         |                               |                               |
	|         | --addons=volumesnapshots           |                                    |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver       |                                    |         |         |                               |                               |
	|         | --driver=kvm2                      |                                    |         |         |                               |                               |
	|         | --container-runtime=crio           |                                    |         |         |                               |                               |
	|         | --addons=ingress                   |                                    |         |         |                               |                               |
	|         | --addons=helm-tiller               |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:38 UTC | Fri, 13 Aug 2021 20:11:53 UTC |
	|         | addons enable gcp-auth --force     |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853 ip     | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:13 UTC | Fri, 13 Aug 2021 20:12:14 UTC |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:14 UTC | Fri, 13 Aug 2021 20:12:15 UTC |
	|         | addons disable registry            |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:20 UTC | Fri, 13 Aug 2021 20:12:21 UTC |
	|         | addons disable metrics-server      |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:02 UTC | Fri, 13 Aug 2021 20:13:03 UTC |
	|         | addons disable helm-tiller         |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:58 UTC | Fri, 13 Aug 2021 20:13:10 UTC |
	|         | addons disable gcp-auth            |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:17 UTC | Fri, 13 Aug 2021 20:13:24 UTC |
	|         | addons disable                     |                                    |         |         |                               |                               |
	|         | csi-hostpath-driver                |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:24 UTC | Fri, 13 Aug 2021 20:13:25 UTC |
	|         | addons disable volumesnapshots     |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210813200811-30853        | addons-20210813200811-30853        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:16:35 UTC | Fri, 13 Aug 2021 20:17:04 UTC |
	|         | addons disable ingress             |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	|---------|------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:11
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:11.648151   31202 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:11.648221   31202 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:11.648247   31202 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:11.648250   31202 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:11.648338   31202 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:08:11.649049   31202 out.go:305] Setting JSON to false
	I0813 20:08:11.683369   31202 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":6654,"bootTime":1628878638,"procs":143,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:11.683456   31202 start.go:121] virtualization: kvm guest
	I0813 20:08:11.685761   31202 out.go:177] * [addons-20210813200811-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:08:11.687219   31202 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:08:11.685912   31202 notify.go:169] Checking for updates...
	I0813 20:08:11.688640   31202 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:08:11.689962   31202 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:11.691300   31202 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:08:11.691469   31202 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:08:11.723894   31202 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:08:11.723917   31202 start.go:278] selected driver: kvm2
	I0813 20:08:11.723923   31202 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:08:11.723937   31202 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:08:11.724896   31202 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:11.725080   31202 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:08:11.736097   31202 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:08:11.736144   31202 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:08:11.736317   31202 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:08:11.736343   31202 cni.go:93] Creating CNI manager for ""
	I0813 20:08:11.736350   31202 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:08:11.736358   31202 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:08:11.736366   31202 start_flags.go:277] config:
	{Name:addons-20210813200811-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200811-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:11.736448   31202 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:11.738324   31202 out.go:177] * Starting control plane node addons-20210813200811-30853 in cluster addons-20210813200811-30853
	I0813 20:08:11.738349   31202 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:08:11.738380   31202 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:11.738402   31202 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:11.738522   31202 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:08:11.738544   31202 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:08:11.738794   31202 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/config.json ...
	I0813 20:08:11.738826   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/config.json: {Name:mkf04fe93086d045f7851320b0f4d3ce470e7908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:11.738988   31202 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:08:11.739069   31202 start.go:313] acquiring machines lock for addons-20210813200811-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:08:11.739156   31202 start.go:317] acquired machines lock for "addons-20210813200811-30853" in 69.435µs
	I0813 20:08:11.739184   31202 start.go:89] Provisioning new machine with config: &{Name:addons-20210813200811-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200811-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:08:11.739257   31202 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 20:08:11.741021   31202 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0813 20:08:11.741128   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:08:11.741174   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:08:11.750719   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44953
	I0813 20:08:11.751315   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:08:11.751903   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:08:11.751938   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:08:11.752316   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:08:11.752469   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetMachineName
	I0813 20:08:11.752605   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:11.752718   31202 start.go:160] libmachine.API.Create for "addons-20210813200811-30853" (driver="kvm2")
	I0813 20:08:11.752750   31202 client.go:168] LocalClient.Create starting
	I0813 20:08:11.752787   31202 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:08:12.077785   31202 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:08:12.170643   31202 main.go:130] libmachine: Running pre-create checks...
	I0813 20:08:12.170676   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .PreCreateCheck
	I0813 20:08:12.171176   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetConfigRaw
	I0813 20:08:12.171707   31202 main.go:130] libmachine: Creating machine...
	I0813 20:08:12.171727   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Create
	I0813 20:08:12.171878   31202 main.go:130] libmachine: (addons-20210813200811-30853) Creating KVM machine...
	I0813 20:08:12.174668   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found existing default KVM network
	I0813 20:08:12.175821   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:12.175658   31227 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000102d8] misses:0}
	I0813 20:08:12.175857   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:12.175764   31227 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:08:12.193373   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | trying to create private KVM network mk-addons-20210813200811-30853 192.168.39.0/24...
	I0813 20:08:12.475280   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | private KVM network mk-addons-20210813200811-30853 192.168.39.0/24 created
	I0813 20:08:12.475342   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:12.475237   31227 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:12.475365   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853 ...
	I0813 20:08:12.475394   31202 main.go:130] libmachine: (addons-20210813200811-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:08:12.475414   31202 main.go:130] libmachine: (addons-20210813200811-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:08:12.654142   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:12.654015   31227 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa...
	I0813 20:08:12.705051   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:12.704958   31227 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/addons-20210813200811-30853.rawdisk...
	I0813 20:08:12.705085   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Writing magic tar header
	I0813 20:08:12.705107   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Writing SSH key tar header
	I0813 20:08:12.705133   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:12.705090   31227 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853 ...
	I0813 20:08:12.705293   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853
	I0813 20:08:12.705340   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853 (perms=drwx------)
	I0813 20:08:12.705358   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:08:12.705388   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:12.705412   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:08:12.705434   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:08:12.705453   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:08:12.705472   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:08:12.705489   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:08:12.705507   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:08:12.705530   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:08:12.705547   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Checking permissions on dir: /home
	I0813 20:08:12.705561   31202 main.go:130] libmachine: (addons-20210813200811-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:08:12.705575   31202 main.go:130] libmachine: (addons-20210813200811-30853) Creating domain...
	I0813 20:08:12.705589   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Skipping /home - not owner
	I0813 20:08:12.729426   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:ed:84:66 in network default
	I0813 20:08:12.729887   31202 main.go:130] libmachine: (addons-20210813200811-30853) Ensuring networks are active...
	I0813 20:08:12.729931   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:12.731797   31202 main.go:130] libmachine: (addons-20210813200811-30853) Ensuring network default is active
	I0813 20:08:12.732097   31202 main.go:130] libmachine: (addons-20210813200811-30853) Ensuring network mk-addons-20210813200811-30853 is active
	I0813 20:08:12.732556   31202 main.go:130] libmachine: (addons-20210813200811-30853) Getting domain xml...
	I0813 20:08:12.734335   31202 main.go:130] libmachine: (addons-20210813200811-30853) Creating domain...
	I0813 20:08:13.155135   31202 main.go:130] libmachine: (addons-20210813200811-30853) Waiting to get IP...
	I0813 20:08:13.155794   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:13.156289   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:13.156389   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:13.156325   31227 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:08:13.420735   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:13.421182   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:13.421203   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:13.421142   31227 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:08:13.803734   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:13.804276   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:13.804304   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:13.804226   31227 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:08:14.228749   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:14.229168   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:14.229222   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:14.229115   31227 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:08:14.703561   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:14.704088   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:14.704116   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:14.704037   31227 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:08:15.292776   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:15.293318   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:15.293352   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:15.293279   31227 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:08:16.129047   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:16.129621   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:16.129654   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:16.129566   31227 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:08:16.877962   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:16.878406   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:16.878438   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:16.878360   31227 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:08:17.867735   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:17.868219   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:17.868252   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:17.868173   31227 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:08:19.059563   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:19.060108   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:19.060140   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:19.060051   31227 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:08:20.738778   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:20.739330   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:20.739362   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:20.739274   31227 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:08:23.087561   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:23.087956   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:23.087979   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:23.087935   31227 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:08:26.458523   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:26.458958   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find current IP address of domain addons-20210813200811-30853 in network mk-addons-20210813200811-30853
	I0813 20:08:26.458988   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | I0813 20:08:26.458902   31227 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 20:08:29.580415   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.580949   31202 main.go:130] libmachine: (addons-20210813200811-30853) Found IP for machine: 192.168.39.144
	I0813 20:08:29.580985   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has current primary IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.581000   31202 main.go:130] libmachine: (addons-20210813200811-30853) Reserving static IP address...
	I0813 20:08:29.581300   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | unable to find host DHCP lease matching {name: "addons-20210813200811-30853", mac: "52:54:00:e9:e6:68", ip: "192.168.39.144"} in network mk-addons-20210813200811-30853
	I0813 20:08:29.628339   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Getting to WaitForSSH function...
	I0813 20:08:29.628378   31202 main.go:130] libmachine: (addons-20210813200811-30853) Reserved static IP address: 192.168.39.144
	I0813 20:08:29.628396   31202 main.go:130] libmachine: (addons-20210813200811-30853) Waiting for SSH to be available...
	I0813 20:08:29.633341   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.633693   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:29.633726   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.633834   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Using SSH client type: external
	I0813 20:08:29.633864   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa (-rw-------)
	I0813 20:08:29.633911   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:08:29.633929   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | About to run SSH command:
	I0813 20:08:29.633943   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | exit 0
	I0813 20:08:29.762121   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 20:08:29.762541   31202 main.go:130] libmachine: (addons-20210813200811-30853) KVM machine creation complete!
	I0813 20:08:29.762641   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetConfigRaw
	I0813 20:08:29.763284   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:29.763467   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:29.763663   31202 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:08:29.763682   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:08:29.766152   31202 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:08:29.766178   31202 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:08:29.766187   31202 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:08:29.766196   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:29.770620   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.770964   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:29.770987   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.771111   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:29.771293   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:29.771469   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:29.771586   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:29.771745   31202 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:29.771981   31202 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0813 20:08:29.771996   31202 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:08:29.885961   31202 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:08:29.885989   31202 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:08:29.885997   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:29.891354   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.891748   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:29.891783   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:29.891879   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:29.892055   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:29.892224   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:29.892369   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:29.892516   31202 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:29.892664   31202 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0813 20:08:29.892675   31202 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:08:30.007695   31202 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:08:30.007785   31202 main.go:130] libmachine: found compatible host: buildroot
	I0813 20:08:30.007803   31202 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:08:30.007815   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetMachineName
	I0813 20:08:30.008046   31202 buildroot.go:166] provisioning hostname "addons-20210813200811-30853"
	I0813 20:08:30.008075   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetMachineName
	I0813 20:08:30.008242   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:30.013117   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.013406   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:30.013434   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.013530   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:30.013664   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.013791   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.013881   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:30.014010   31202 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:30.014155   31202 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0813 20:08:30.014170   31202 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210813200811-30853 && echo "addons-20210813200811-30853" | sudo tee /etc/hostname
	I0813 20:08:30.131333   31202 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210813200811-30853
	
	I0813 20:08:30.131380   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:30.137152   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.137478   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:30.137514   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.137604   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:30.137769   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.137916   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.138069   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:30.138210   31202 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:30.138364   31202 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0813 20:08:30.138391   31202 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210813200811-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210813200811-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210813200811-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:08:30.257176   31202 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:08:30.257214   31202 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:08:30.257238   31202 buildroot.go:174] setting up certificates
	I0813 20:08:30.257254   31202 provision.go:83] configureAuth start
	I0813 20:08:30.257264   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetMachineName
	I0813 20:08:30.257519   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetIP
	I0813 20:08:30.262539   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.262846   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:30.262873   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.263025   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:30.267067   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.267310   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:30.267341   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.267401   31202 provision.go:138] copyHostCerts
	I0813 20:08:30.267496   31202 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:08:30.267763   31202 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:08:30.267851   31202 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:08:30.267909   31202 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.addons-20210813200811-30853 san=[192.168.39.144 192.168.39.144 localhost 127.0.0.1 minikube addons-20210813200811-30853]
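The server certificate here is generated in Go (provision.go), not by openssl, but the operation is conceptually the following openssl sketch; file names are taken from the paths in the log and the SAN list mirrors the san=[...] entry above. Illustrative only, not minikube's actual code path:

	# sign a server cert against the minikube CA, with the SANs from the log line above
	openssl req -new -key server-key.pem -subj "/O=jenkins.addons-20210813200811-30853" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.39.144,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-20210813200811-30853") \
	  -out server.pem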
	I0813 20:08:30.342416   31202 provision.go:172] copyRemoteCerts
	I0813 20:08:30.342483   31202 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:08:30.342514   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:30.347262   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.347525   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:30.347558   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.347688   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:30.347833   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.347928   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:30.348026   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:08:30.429436   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:08:30.447064   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:08:30.464237   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:08:30.481257   31202 provision.go:86] duration metric: configureAuth took 223.990616ms
	I0813 20:08:30.481279   31202 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:08:30.481444   31202 config.go:177] Loaded profile config "addons-20210813200811-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:08:30.481554   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:30.486691   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.487021   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:30.487050   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:30.487207   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:30.487366   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.487486   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:30.487578   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:30.487676   31202 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:30.487838   31202 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0813 20:08:30.487861   31202 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:08:31.142525   31202 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
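This drop-in is how the --insecure-registry option for the service CIDR reaches CRI-O: the ISO's crio unit is set up to source /etc/sysconfig/crio.minikube, and the restart above picks it up. To inspect the result on the guest:

	out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh \
	  "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"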
	
	I0813 20:08:31.142565   31202 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:08:31.142578   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetURL
	I0813 20:08:31.145473   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Using libvirt version 3000000
	I0813 20:08:31.150055   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.150397   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:31.150427   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.150565   31202 main.go:130] libmachine: Docker is up and running!
	I0813 20:08:31.150583   31202 main.go:130] libmachine: Reticulating splines...
	I0813 20:08:31.150592   31202 client.go:171] LocalClient.Create took 19.397831226s
	I0813 20:08:31.150611   31202 start.go:168] duration metric: libmachine.API.Create for "addons-20210813200811-30853" took 19.397894228s
	I0813 20:08:31.150620   31202 start.go:267] post-start starting for "addons-20210813200811-30853" (driver="kvm2")
	I0813 20:08:31.150625   31202 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:08:31.150647   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:31.150834   31202 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:08:31.150875   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:31.155152   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.155509   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:31.155549   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.155633   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:31.155792   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:31.155943   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:31.156096   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:08:31.238431   31202 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:08:31.243561   31202 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:08:31.243595   31202 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:08:31.243663   31202 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:08:31.243694   31202 start.go:270] post-start completed in 93.067974ms
	I0813 20:08:31.243730   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetConfigRaw
	I0813 20:08:31.244218   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetIP
	I0813 20:08:31.249161   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.249496   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:31.249533   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.249701   31202 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/config.json ...
	I0813 20:08:31.249864   31202 start.go:129] duration metric: createHost completed in 19.510597019s
	I0813 20:08:31.249879   31202 start.go:80] releasing machines lock for "addons-20210813200811-30853", held for 19.51070885s
	I0813 20:08:31.249920   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:31.250115   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetIP
	I0813 20:08:31.254296   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.254612   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:31.254659   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.254770   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:31.254938   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:31.255440   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:08:31.255701   31202 ssh_runner.go:149] Run: systemctl --version
	I0813 20:08:31.255729   31202 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:08:31.255731   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:31.255771   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:08:31.263375   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.263438   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.263730   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:31.263760   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.263779   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:31.263792   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:31.263896   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:31.264043   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:08:31.264108   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:31.264203   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:08:31.264346   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:31.264350   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:08:31.264559   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:08:31.264558   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:08:31.354742   31202 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:08:31.354919   31202 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:08:35.362469   31202 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.007520373s)
	I0813 20:08:35.362607   31202 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 20:08:35.362668   31202 ssh_runner.go:149] Run: which lz4
	I0813 20:08:35.367129   31202 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:08:35.371511   31202 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:08:35.371537   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 20:08:37.579250   31202 crio.go:362] Took 2.212168 seconds to copy over tarball
	I0813 20:08:37.579317   31202 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:08:42.881486   31202 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.302141631s)
	I0813 20:08:42.881608   31202 crio.go:369] Took 5.302320 seconds to extract the tarball
	I0813 20:08:42.881637   31202 ssh_runner.go:100] rm: /preloaded.tar.lz4
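Spelled out, the preload path above is three guest-side commands: an existence check (which fails on a fresh VM, hence the scp of the 576184326-byte tarball), an lz4-aware tar extraction into /var, where CRI-O keeps its image store, and cleanup:

	stat -c "%s %y" /preloaded.tar.lz4               # missing on a fresh VM, so the tarball is copied over
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack the preloaded images
	sudo rm /preloaded.tar.lz4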
	I0813 20:08:42.920606   31202 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:08:42.933154   31202 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:08:42.943649   31202 docker.go:153] disabling docker service ...
	I0813 20:08:42.943705   31202 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:08:42.954214   31202 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:08:42.963210   31202 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:08:43.106553   31202 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:08:43.239413   31202 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:08:43.251108   31202 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:08:43.265180   31202 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:08:43.273195   31202 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:08:43.280086   31202 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:08:43.280143   31202 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:08:43.295895   31202 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
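The sysctl probe fails with status 255 because br_netfilter is not loaded yet, which is expected (the log itself says "might be okay"); the two commands that follow are the remediation, after which the probe succeeds:

	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolvable; bridged pod traffic traverses iptables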
	I0813 20:08:43.303207   31202 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:08:43.434124   31202 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:08:43.713962   31202 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:08:43.714036   31202 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:08:43.719968   31202 start.go:413] Will wait 60s for crictl version
	I0813 20:08:43.720034   31202 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:08:43.752170   31202 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:08:43.752256   31202 ssh_runner.go:149] Run: crio --version
	I0813 20:08:43.875206   31202 ssh_runner.go:149] Run: crio --version
	I0813 20:08:48.688292   31202 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:08:48.688525   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetIP
	I0813 20:08:48.693833   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:48.694214   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:08:48.694241   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:08:48.694431   31202 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:08:48.699820   31202 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:08:48.711745   31202 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:08:48.711808   31202 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:08:48.782726   31202 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:08:48.782752   31202 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:08:48.782807   31202 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:08:48.814408   31202 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:08:48.814438   31202 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:08:48.814607   31202 ssh_runner.go:149] Run: crio config
	I0813 20:08:49.083464   31202 cni.go:93] Creating CNI manager for ""
	I0813 20:08:49.083496   31202 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:08:49.083509   31202 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:08:49.083523   31202 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210813200811-30853 NodeName:addons-20210813200811-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.144 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:08:49.083652   31202 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210813200811-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
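The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml a few lines below. A config like this can be exercised without mutating the node via kubeadm's dry-run mode (sketch; assumes the v1.21.3 kubeadm binary staged on the guest):

	sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run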
	
	I0813 20:08:49.083745   31202 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=addons-20210813200811-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.144 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200811-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
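The generated drop-in (the 10-kubeadm.conf scp'd below) overrides ExecStart with the CRI-O endpoints and node IP. Once installed, the effective unit plus its drop-ins can be reviewed in one shot:

	out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "systemctl cat kubelet"
	# prints /lib/systemd/system/kubelet.service followed by /etc/systemd/system/kubelet.service.d/10-kubeadm.conf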
	I0813 20:08:49.083797   31202 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:08:49.090744   31202 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:08:49.090811   31202 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:08:49.097169   31202 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (508 bytes)
	I0813 20:08:49.108919   31202 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:08:49.120407   31202 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0813 20:08:49.132801   31202 ssh_runner.go:149] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0813 20:08:49.136770   31202 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
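The one-liner above is minikube's idempotent /etc/hosts editor: strip any previous entry for the name, append a fresh one, then copy the temp file back with sudo (a plain redirection would not have root on the target). Generalized, with NAME and IP as placeholders:

	NAME=control-plane.minikube.internal IP=192.168.39.144
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$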
	I0813 20:08:49.146644   31202 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853 for IP: 192.168.39.144
	I0813 20:08:49.146696   31202 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:08:49.267228   31202 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt ...
	I0813 20:08:49.267258   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt: {Name:mk4a63afac247de0361fc4de5c71c6d9bacaf92e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.267467   31202 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key ...
	I0813 20:08:49.267482   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key: {Name:mk8cbfd2288db1a3e287ce9d81a734e1e75aa53a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.267568   31202 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:08:49.427036   31202 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt ...
	I0813 20:08:49.427069   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt: {Name:mkdde6e16de00b486c0a31ef2104996d27c0160e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.427285   31202 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key ...
	I0813 20:08:49.427304   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key: {Name:mk95ac24165a30f9e09be84f4a6fb0b18a9ec6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.427465   31202 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.key
	I0813 20:08:49.427479   31202 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt with IP's: []
	I0813 20:08:49.745316   31202 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt ...
	I0813 20:08:49.745346   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: {Name:mk86fac8e7b07652add915cfc1889482bd6e8e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.745547   31202 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.key ...
	I0813 20:08:49.745565   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.key: {Name:mk730e2cffd089c45c9670420583e98d91c7b36a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.745694   31202 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.key.4482163a
	I0813 20:08:49.745708   31202 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.crt.4482163a with IP's: [192.168.39.144 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:08:49.866146   31202 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.crt.4482163a ...
	I0813 20:08:49.866202   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.crt.4482163a: {Name:mk60be00a11ad76d3fb8489a0467d0690666b338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.866407   31202 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.key.4482163a ...
	I0813 20:08:49.866424   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.key.4482163a: {Name:mk297c41217753ab1575e4bad9064c69aad7a681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.866539   31202 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.crt.4482163a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.crt
	I0813 20:08:49.866609   31202 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.key.4482163a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.key
	I0813 20:08:49.866676   31202 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.key
	I0813 20:08:49.866690   31202 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.crt with IP's: []
	I0813 20:08:49.923282   31202 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.crt ...
	I0813 20:08:49.923307   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.crt: {Name:mkd1f9cefa34c48b567b1a21556950663b85e330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.923460   31202 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.key ...
	I0813 20:08:49.923476   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.key: {Name:mkee01a530d8f7c6d9512d69085471b64e4e2dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:49.923656   31202 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:08:49.923701   31202 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:08:49.923738   31202 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:08:49.923774   31202 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:08:49.924906   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:08:49.942328   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:08:49.958553   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:08:49.974388   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:08:49.990088   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:08:50.005914   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:08:50.021948   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:08:50.037911   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:08:50.053800   31202 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:08:50.069534   31202 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:08:50.081209   31202 ssh_runner.go:149] Run: openssl version
	I0813 20:08:50.087384   31202 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:08:50.095178   31202 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:08:50.100037   31202 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:08:50.100078   31202 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:08:50.106019   31202 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
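The name b5213941.0 is not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by subject-hash file names, and the hash is exactly what the `openssl x509 -hash` run two lines up prints:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	# hence the symlink /etc/ssl/certs/b5213941.0 -> minikubeCA.pem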
	I0813 20:08:50.113923   31202 kubeadm.go:390] StartCluster: {Name:addons-20210813200811-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200811-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:50.114018   31202 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:08:50.114050   31202 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:08:50.147749   31202 cri.go:76] found id: ""
	I0813 20:08:50.147813   31202 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:08:50.155389   31202 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:08:50.161751   31202 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:08:50.168324   31202 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:08:50.168361   31202 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:09:10.987516   31202 out.go:204]   - Generating certificates and keys ...
	I0813 20:09:10.990383   31202 out.go:204]   - Booting up control plane ...
	I0813 20:09:10.993138   31202 out.go:204]   - Configuring RBAC rules ...
	I0813 20:09:10.995138   31202 cni.go:93] Creating CNI manager for ""
	I0813 20:09:10.995157   31202 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:09:10.996784   31202 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:09:10.996874   31202 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:09:11.012356   31202 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 20:09:11.027245   31202 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:09:11.027318   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:11.027351   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=addons-20210813200811-30853 minikube.k8s.io/updated_at=2021_08_13T20_09_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:11.352789   31202 ops.go:34] apiserver oom_adj: -16
	I0813 20:09:11.352938   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:11.973803   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:12.473766   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:12.974545   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:13.474453   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:13.974042   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:14.474060   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:14.973657   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:15.474159   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:15.974712   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:16.474424   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:16.974143   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:17.473921   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:17.974518   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:18.474005   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:18.973868   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:19.473784   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:19.974403   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:20.474673   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:20.974015   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:21.474338   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:21.974221   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:22.474423   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:22.974582   31202 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:23.199586   31202 kubeadm.go:985] duration metric: took 12.172325006s to wait for elevateKubeSystemPrivileges.
	I0813 20:09:23.199630   31202 kubeadm.go:392] StartCluster complete in 33.085711825s
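The burst of `kubectl get sa default` calls at roughly 500 ms intervals above is minikube waiting for the default ServiceAccount to be minted before binding cluster-admin to kube-system (elevateKubeSystemPrivileges). As a loop it amounts to:

	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done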
	I0813 20:09:23.199657   31202 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:23.199851   31202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:09:23.200483   31202 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:23.753234   31202 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210813200811-30853" rescaled to 1
	I0813 20:09:23.753300   31202 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:09:23.755323   31202 out.go:177] * Verifying Kubernetes components...
	I0813 20:09:23.753447   31202 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:09:23.755450   31202 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:09:23.753471   31202 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0813 20:09:23.753581   31202 config.go:177] Loaded profile config "addons-20210813200811-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
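enableAddons then fans out over the requested list (registry, metrics-server, olm, volumesnapshots, csi-hostpath-driver, ingress, helm-tiller). The same addons can also be toggled manually against this profile with the addons subcommand; one example, the others follow the same shape:

	out/minikube-linux-amd64 -p addons-20210813200811-30853 addons enable ingress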
	I0813 20:09:23.755559   31202 addons.go:59] Setting volumesnapshots=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.755565   31202 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.755577   31202 addons.go:135] Setting addon volumesnapshots=true in "addons-20210813200811-30853"
	I0813 20:09:23.755579   31202 addons.go:59] Setting helm-tiller=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.755588   31202 addons.go:59] Setting ingress=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.755596   31202 addons.go:135] Setting addon helm-tiller=true in "addons-20210813200811-30853"
	I0813 20:09:23.755602   31202 addons.go:135] Setting addon ingress=true in "addons-20210813200811-30853"
	I0813 20:09:23.755610   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.755617   31202 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210813200811-30853"
	I0813 20:09:23.755628   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.755632   31202 addons.go:59] Setting registry=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.755640   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.755650   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.755659   31202 addons.go:135] Setting addon registry=true in "addons-20210813200811-30853"
	I0813 20:09:23.755714   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.756091   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.756108   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.756108   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.756121   31202 addons.go:59] Setting default-storageclass=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.756125   31202 addons.go:59] Setting storage-provisioner=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.756141   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.756173   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.756218   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.756133   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.756148   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.755573   31202 addons.go:59] Setting metrics-server=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.756402   31202 addons.go:135] Setting addon metrics-server=true in "addons-20210813200811-30853"
	I0813 20:09:23.756433   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.756124   31202 addons.go:59] Setting olm=true in profile "addons-20210813200811-30853"
	I0813 20:09:23.756493   31202 addons.go:135] Setting addon olm=true in "addons-20210813200811-30853"
	I0813 20:09:23.756519   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.756153   31202 addons.go:135] Setting addon storage-provisioner=true in "addons-20210813200811-30853"
	W0813 20:09:23.756603   31202 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:09:23.756631   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.756146   31202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210813200811-30853"
	I0813 20:09:23.756844   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.756874   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.756898   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.756920   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.756108   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.757025   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.757038   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.757040   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.757063   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.757065   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.770748   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0813 20:09:23.771186   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.771783   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.771809   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.772194   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.772748   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.772784   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.773881   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0813 20:09:23.774315   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.774770   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.774791   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.774889   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0813 20:09:23.775206   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.775317   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.775800   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.775819   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.775826   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.775851   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.776270   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.776802   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.776846   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.780624   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0813 20:09:23.781020   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.781568   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.781587   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.781933   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.782000   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36329
	I0813 20:09:23.782569   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.783088   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.783113   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.783467   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.783992   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.784037   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.784440   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.784488   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.785776   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0813 20:09:23.791796   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43653
	I0813 20:09:23.791807   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0813 20:09:23.792943   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0813 20:09:23.799641   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0813 20:09:23.803326   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.803429   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.803444   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.803535   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.803871   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.803892   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.804003   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.804031   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.804042   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.804102   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.804123   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.804245   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.804257   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.804469   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.804488   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.804545   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.804583   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.805098   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.805135   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.805185   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.805220   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.805238   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.805430   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.805454   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.806242   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.806292   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.806791   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.806831   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.817993   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.818017   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38449
	I0813 20:09:23.818017   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0813 20:09:23.818020   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0813 20:09:23.820479   31202 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0813 20:09:23.820635   31202 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0813 20:09:23.820653   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0813 20:09:23.820678   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.818454   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.818492   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.818935   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.821281   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.821299   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.821847   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.821873   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.822493   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.822551   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.822764   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.822822   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.825017   31202 addons.go:135] Setting addon default-storageclass=true in "addons-20210813200811-30853"
	W0813 20:09:23.825042   31202 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:09:23.825079   31202 host.go:66] Checking if "addons-20210813200811-30853" exists ...
	I0813 20:09:23.825605   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.825655   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.826575   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.826596   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.827118   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.827330   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.827486   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0813 20:09:23.827651   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40009
	I0813 20:09:23.828344   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.828904   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.828933   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.829288   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.829489   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.829531   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.830222   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.830303   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.830342   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.832447   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.832449   31202 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0813 20:09:23.832570   31202 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:09:23.832585   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:09:23.832607   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.830898   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.830910   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.830898   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.833036   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.833099   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.833123   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.834623   31202 out.go:177]   - Using image registry:2.7.1
	I0813 20:09:23.836034   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0813 20:09:23.836093   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 20:09:23.833241   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.833432   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.833448   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.833513   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0813 20:09:23.837600   31202 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0813 20:09:23.836106   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 20:09:23.837679   31202 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 20:09:23.837689   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0813 20:09:23.837703   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.837706   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.836280   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.839194   31202 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0813 20:09:23.836602   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.836279   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.838262   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.839122   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.839790   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.840779   31202 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0813 20:09:23.840796   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.841076   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.841111   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.841306   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.841502   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.841658   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.842034   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0813 20:09:23.842275   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.842545   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.842754   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.843046   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.843050   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.843104   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.845031   31202 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 20:09:23.843395   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.844116   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0813 20:09:23.845276   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.845578   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.846206   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.846682   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.846885   31202 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0813 20:09:23.848564   31202 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 20:09:23.847012   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.848617   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.847070   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.848621   31202 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0813 20:09:23.848649   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0813 20:09:23.848652   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.848665   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.847213   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.847255   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.847251   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.847840   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.848855   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.848904   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.850931   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0813 20:09:23.849006   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.849275   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.850111   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.849171   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.852845   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0813 20:09:23.851075   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.851231   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.853343   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.854299   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0813 20:09:23.854376   31202 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:09:23.855823   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0813 20:09:23.854473   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.854475   31202 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:09:23.855885   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:09:23.855897   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.855910   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.857354   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0813 20:09:23.855932   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.855071   31202 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:09:23.857411   31202 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:23.855040   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.858893   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0813 20:09:23.857606   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.861038   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0813 20:09:23.859185   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.862612   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0813 20:09:23.861254   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.861331   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.861841   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.864093   31202 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0813 20:09:23.864146   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 20:09:23.864160   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 20:09:23.864176   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.864148   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.864239   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.864466   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.864609   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.864741   31202 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0813 20:09:23.864750   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.864763   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0813 20:09:23.864782   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.870073   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.870476   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.870501   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.870659   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.870789   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.870921   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.871059   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.871340   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.871643   31202 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0813 20:09:23.871737   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.871764   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.871923   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.871979   31202 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:23.872040   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.872201   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.872330   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
	I0813 20:09:23.872450   31202 main.go:130] libmachine: Using API Version  1
	I0813 20:09:23.872469   31202 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:23.872816   31202 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:23.872977   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetState
	I0813 20:09:23.875792   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .DriverName
	I0813 20:09:23.875976   31202 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:09:23.875989   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:09:23.876003   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHHostname
	I0813 20:09:23.881069   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.881387   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:e6:68", ip: ""} in network mk-addons-20210813200811-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:26 +0000 UTC Type:0 Mac:52:54:00:e9:e6:68 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-20210813200811-30853 Clientid:01:52:54:00:e9:e6:68}
	I0813 20:09:23.881418   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | domain addons-20210813200811-30853 has defined IP address 192.168.39.144 and MAC address 52:54:00:e9:e6:68 in network mk-addons-20210813200811-30853
	I0813 20:09:23.881552   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHPort
	I0813 20:09:23.881698   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHKeyPath
	I0813 20:09:23.881850   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .GetSSHUsername
	I0813 20:09:23.881961   31202 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa Username:docker}
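(Note: in the "scp memory --> <path> (<n> bytes)" lines that follow, minikube streams each addon manifest from memory over the SSH connections it has just opened, writing it into /etc/kubernetes/addons inside the guest rather than copying an on-disk file. A rough shell equivalent, using the SSH user, key, and IP shown in the sshutil lines above — the manifest name is illustrative, not taken from this run:

	cat helm-tiller-dp.yaml | ssh -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200811-30853/id_rsa \
	  docker@192.168.39.144 "sudo tee /etc/kubernetes/addons/helm-tiller-dp.yaml >/dev/null"
)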
	I0813 20:09:24.093106   31202 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0813 20:09:24.093130   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0813 20:09:24.110623   31202 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:09:24.110653   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0813 20:09:24.134519   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:09:24.160576   31202 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 20:09:24.160600   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0813 20:09:24.173500   31202 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:09:24.173526   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:09:24.183622   31202 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 20:09:24.183640   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 20:09:24.225247   31202 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
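(Note: the /bin/bash pipeline above rewrites the coredns ConfigMap in place: it dumps the ConfigMap as YAML, uses sed to insert a hosts stanza immediately before the "forward . /etc/resolv.conf" line of the Corefile, and pipes the result back through kubectl replace. Reconstructed from the sed expression, the injected block is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

This is what gives pods a stable host.minikube.internal name for the host machine; the confirmation appears below at 20:09:28, "host record injected into CoreDNS".)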
	I0813 20:09:24.226640   31202 node_ready.go:35] waiting up to 6m0s for node "addons-20210813200811-30853" to be "Ready" ...
	I0813 20:09:24.230764   31202 node_ready.go:49] node "addons-20210813200811-30853" has status "Ready":"True"
	I0813 20:09:24.230783   31202 node_ready.go:38] duration metric: took 4.119345ms waiting for node "addons-20210813200811-30853" to be "Ready" ...
	I0813 20:09:24.230796   31202 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:09:24.242062   31202 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gfxc4" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:24.272859   31202 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 20:09:24.272895   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 20:09:24.285707   31202 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 20:09:24.285725   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 20:09:24.292874   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:09:24.299893   31202 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:09:24.299914   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:09:24.308979   31202 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0813 20:09:24.308995   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0813 20:09:24.317661   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 20:09:24.330041   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0813 20:09:24.330059   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0813 20:09:24.340692   31202 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0813 20:09:24.340708   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0813 20:09:24.375243   31202 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 20:09:24.375262   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0813 20:09:24.398811   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 20:09:24.398829   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0813 20:09:24.410765   31202 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 20:09:24.410783   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0813 20:09:24.436503   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:09:24.450211   31202 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0813 20:09:24.450233   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0813 20:09:24.456932   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 20:09:24.484276   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 20:09:24.541638   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 20:09:24.541661   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0813 20:09:24.563659   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 20:09:24.563682   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0813 20:09:24.633448   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0813 20:09:24.749956   31202 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:24.749979   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0813 20:09:24.803417   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 20:09:24.803442   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0813 20:09:25.077039   31202 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 20:09:25.077066   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0813 20:09:25.121190   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:25.438865   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 20:09:25.438894   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0813 20:09:25.617478   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 20:09:25.617506   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0813 20:09:26.017947   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 20:09:26.017968   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0813 20:09:26.207089   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0813 20:09:26.207113   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0813 20:09:26.261493   31202 pod_ready.go:102] pod "coredns-558bd4d5db-gfxc4" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:26.368429   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 20:09:26.368451   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0813 20:09:26.588410   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0813 20:09:26.588433   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0813 20:09:27.122294   31202 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 20:09:27.122319   31202 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 20:09:27.353217   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
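(Note: the csi-hostpath-driver addon is applied as one batched invocation over all thirteen manifests staged above — RBAC, the driver info, the plugin, and its sidecar storage classes — via repeated -f flags in a single kubectl run. The batching is not transactional: as the olm apply below demonstrates, objects that succeed stay created even when later ones in the same invocation fail. A trimmed sketch of the pattern, with the flag list abbreviated; the full list is in the line above:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.21.3/kubectl apply \
	  -f /etc/kubernetes/addons/rbac-external-attacher.yaml \
	  -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	  # ...remaining -f flags as listed above
)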
	I0813 20:09:28.149111   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.014550393s)
	I0813 20:09:28.149172   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:28.149187   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:28.149472   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:28.149495   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:28.149512   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:28.149522   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:28.149747   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:28.149764   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:28.149776   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:28.149787   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:28.149786   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:28.150028   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:28.150056   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:28.150073   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:28.336647   31202 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.111355401s)
	I0813 20:09:28.336696   31202 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 20:09:28.548578   31202 pod_ready.go:102] pod "coredns-558bd4d5db-gfxc4" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:29.262691   31202 pod_ready.go:92] pod "coredns-558bd4d5db-gfxc4" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:29.262728   31202 pod_ready.go:81] duration metric: took 5.020642274s waiting for pod "coredns-558bd4d5db-gfxc4" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.262741   31202 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.276230   31202 pod_ready.go:92] pod "etcd-addons-20210813200811-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:29.276247   31202 pod_ready.go:81] duration metric: took 13.498733ms waiting for pod "etcd-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.276256   31202 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.283855   31202 pod_ready.go:92] pod "kube-apiserver-addons-20210813200811-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:29.283871   31202 pod_ready.go:81] duration metric: took 7.607753ms waiting for pod "kube-apiserver-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.283884   31202 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.323482   31202 pod_ready.go:92] pod "kube-controller-manager-addons-20210813200811-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:29.323499   31202 pod_ready.go:81] duration metric: took 39.606947ms waiting for pod "kube-controller-manager-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.323509   31202 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kdr8f" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.330876   31202 pod_ready.go:92] pod "kube-proxy-kdr8f" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:29.330897   31202 pod_ready.go:81] duration metric: took 7.380797ms waiting for pod "kube-proxy-kdr8f" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.330908   31202 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.348349   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.055442166s)
	I0813 20:09:29.348400   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:29.348420   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:29.348585   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.030894736s)
	I0813 20:09:29.348631   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:29.348649   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:29.348664   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.912128914s)
	I0813 20:09:29.348695   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:29.348714   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:29.348726   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:29.348738   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:29.348753   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:29.348774   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:29.348785   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:29.348862   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:29.348876   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:29.348885   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:29.348894   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:29.348899   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:29.348958   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:29.348976   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:29.349017   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:29.349025   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:29.349029   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:29.349037   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:29.349048   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:29.349056   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:29.349157   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:29.349174   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:29.350009   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:29.350016   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:29.350029   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:29.350038   31202 addons.go:313] Verifying addon metrics-server=true in "addons-20210813200811-30853"
	I0813 20:09:29.672162   31202 pod_ready.go:92] pod "kube-scheduler-addons-20210813200811-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:29.672179   31202 pod_ready.go:81] duration metric: took 341.26378ms waiting for pod "kube-scheduler-addons-20210813200811-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:29.672188   31202 pod_ready.go:38] duration metric: took 5.441380102s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:09:29.672205   31202 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:09:29.672245   31202 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
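(Note: component verification here runs in two stages: first each system-critical pod is polled until its Ready condition is True (the pod_ready lines above), then the apiserver process itself is located on the node with pgrep. Outside minikube's Go poller, roughly the same pod check can be expressed with kubectl — label and namespace taken from the log, the timeout illustrative:

	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
)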
	I0813 20:09:34.626787   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.142474285s)
	I0813 20:09:34.626809   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (10.169849706s)
	I0813 20:09:34.626845   31202 main.go:130] libmachine: Making call to close driver server
	W0813 20:09:34.626861   31202 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 20:09:34.626870   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:34.626894   31202 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
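Both failing applies are handed back to minikube's retry helper (the retry.go:31 lines), which re-runs the kubectl command after a short delay. A stripped-down sketch of that retry-after pattern is below; the helper and delays are illustrative, not the actual retry.go implementation.

```go
package main

import (
	"fmt"
	"time"
)

// retryAfter re-runs apply up to attempts times, sleeping delay between
// tries, and returns the last error if every attempt fails.
func retryAfter(attempts int, delay time.Duration, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	// Stand-in for the kubectl apply: fails once, then succeeds,
	// mimicking the CRD-registration race seen in the log above.
	err := retryAfter(3, 276*time.Millisecond, func() error {
		calls++
		if calls == 1 {
			return fmt.Errorf(`no matches for kind "OperatorGroup"`)
		}
		return nil
	})
	fmt.Println("result:", err)
}
```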
	I0813 20:09:34.626961   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (9.993470938s)
	I0813 20:09:34.627000   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:34.627043   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:34.627071   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.505835292s)
	W0813 20:09:34.627103   31202 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 20:09:34.627122   31202 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
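The `no matches for kind` errors in both failures are the usual race after creating CRDs: the same apply both registers the CRDs and creates custom resources of the new kinds, and the API server has not yet established the types on the first pass, hence the retries. One way to sidestep the race is to wait for the CRD's Established condition before applying the custom resources, sketched here with the apiextensions client; the CRD name is from the log, everything else is an assumption for illustration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// crdEstablished reports whether the named CRD has condition Established=True.
func crdEstablished(cs clientset.Interface, name string) bool {
	crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
		context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	for _, c := range crd.Status.Conditions {
		if c.Type == apiextensionsv1.Established {
			return c.Status == apiextensionsv1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CRD name from the log; poll until the server can serve the new kind.
	for !crdEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io") {
		time.Sleep(250 * time.Millisecond)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
```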
	I0813 20:09:34.627238   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:34.627241   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:34.627267   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:34.627333   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:34.627349   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:34.627359   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:34.627370   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:34.627410   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:34.627431   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:34.627609   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:34.627612   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:34.627640   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:34.627659   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:34.627686   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:34.627697   31202 addons.go:313] Verifying addon ingress=true in "addons-20210813200811-30853"
	I0813 20:09:34.629659   31202 out.go:177] * Verifying ingress addon...
	I0813 20:09:34.629778   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:34.629799   31202 addons.go:313] Verifying addon registry=true in "addons-20210813200811-30853"
	I0813 20:09:34.631385   31202 out.go:177] * Verifying registry addon...
	I0813 20:09:34.631530   31202 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 20:09:34.633291   31202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 20:09:34.714900   31202 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 20:09:34.714922   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:34.723894   31202 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 20:09:34.723911   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
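The kapi.go:75/86/96 lines poll the pods matching a label selector and log each pod's phase until all reach Running, which produces the long runs of `waiting for pod` lines that follow. A condensed sketch of that loop is below; selector and namespace are from the log, the code is only illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allRunning lists pods by label selector and reports whether every
// matching pod has reached phase Running.
func allRunning(cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := allRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
		if err == nil && ok {
			fmt.Println("all pods Running")
			return
		}
		time.Sleep(500 * time.Millisecond) // illustrative poll interval
	}
}
```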
	I0813 20:09:34.904219   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 20:09:34.987781   31202 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:35.293052   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:35.301330   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:35.784510   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:35.791325   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:36.253145   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:36.253438   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:36.331666   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.978368098s)
	I0813 20:09:36.331725   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:36.331738   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:36.331682   31202 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.659417729s)
	I0813 20:09:36.331817   31202 api_server.go:70] duration metric: took 12.578490595s to wait for apiserver process to appear ...
	I0813 20:09:36.331833   31202 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:09:36.331851   31202 api_server.go:239] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0813 20:09:36.332004   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:36.332020   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:36.332031   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:36.332042   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:36.332313   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:36.332332   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:36.332344   31202 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210813200811-30853"
	I0813 20:09:36.334467   31202 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 20:09:36.336329   31202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 20:09:36.405303   31202 api_server.go:265] https://192.168.39.144:8443/healthz returned 200:
	ok
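api_server.go:239/265 probes the apiserver's /healthz endpoint directly and treats an HTTP 200 with body `ok` as healthy. A self-contained sketch of such a probe follows; TLS verification is skipped here only to keep the sketch short, whereas minikube trusts the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the sketch self-contained; real code
	// should verify against the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.144:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.39.144:8443/healthz returned %d: %s\n",
		resp.StatusCode, string(body))
}
```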
	I0813 20:09:36.405933   31202 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 20:09:36.405964   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:36.435740   31202 api_server.go:139] control plane version: v1.21.3
	I0813 20:09:36.435776   31202 api_server.go:129] duration metric: took 103.931627ms to wait for apiserver health ...
	I0813 20:09:36.435788   31202 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:09:36.448613   31202 system_pods.go:59] 18 kube-system pods found
	I0813 20:09:36.448657   31202 system_pods.go:61] "coredns-558bd4d5db-gfxc4" [23bd3629-43d4-4649-b36c-2b3a94e87aa9] Running
	I0813 20:09:36.448670   31202 system_pods.go:61] "csi-hostpath-attacher-0" [4456fd59-fa54-4d29-b478-ffb6615cc1b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity rules.)
	I0813 20:09:36.448684   31202 system_pods.go:61] "csi-hostpath-provisioner-0" [3640902d-c0be-44ba-9d53-f888702d8b5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0813 20:09:36.448694   31202 system_pods.go:61] "csi-hostpath-resizer-0" [fd8a24f6-a473-41d2-bd8e-56adf37f33ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 20:09:36.448708   31202 system_pods.go:61] "csi-hostpath-snapshotter-0" [45fe597b-bfb9-4a56-8ab4-82e31052f365] Pending
	I0813 20:09:36.448721   31202 system_pods.go:61] "csi-hostpathplugin-0" [65744517-cccd-4766-9501-26eb0ffd6962] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0813 20:09:36.448731   31202 system_pods.go:61] "etcd-addons-20210813200811-30853" [b6a41140-5e92-487f-9195-f893f1a25736] Running
	I0813 20:09:36.448743   31202 system_pods.go:61] "kube-apiserver-addons-20210813200811-30853" [d5158552-87af-46e6-a553-9068b411a08b] Running
	I0813 20:09:36.448752   31202 system_pods.go:61] "kube-controller-manager-addons-20210813200811-30853" [2c8d2556-1edb-42e6-ab70-c5c067fa449b] Running
	I0813 20:09:36.448759   31202 system_pods.go:61] "kube-proxy-kdr8f" [71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f] Running
	I0813 20:09:36.448763   31202 system_pods.go:61] "kube-scheduler-addons-20210813200811-30853" [e70f42d6-f2b1-43a4-b29b-778c80623bed] Running
	I0813 20:09:36.448775   31202 system_pods.go:61] "metrics-server-77c99ccb96-gnqsc" [c3871703-1162-4d76-bf75-ce2c9fa75212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:09:36.448787   31202 system_pods.go:61] "registry-h6s98" [1829875c-4f3b-483e-8582-350974b1fece] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 20:09:36.448800   31202 system_pods.go:61] "registry-proxy-h8lsg" [acb087eb-33aa-47b9-8ccd-ecea64c4ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 20:09:36.448813   31202 system_pods.go:61] "snapshot-controller-989f9ddc8-sxvj5" [55a8491c-98b9-4fba-b435-71e28bf57dcc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:36.448828   31202 system_pods.go:61] "snapshot-controller-989f9ddc8-vwk22" [5f5fcf8f-f47d-4003-90db-51cd367870ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:36.448843   31202 system_pods.go:61] "storage-provisioner" [c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:09:36.448854   31202 system_pods.go:61] "tiller-deploy-768d69497-bmgs8" [b27137ea-ef0f-44d5-9fd1-42ec9aa91f1e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 20:09:36.448865   31202 system_pods.go:74] duration metric: took 13.069196ms to wait for pod list to return data ...
	I0813 20:09:36.448879   31202 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:09:36.454198   31202 default_sa.go:45] found service account: "default"
	I0813 20:09:36.454217   31202 default_sa.go:55] duration metric: took 5.329222ms for default service account to be created ...
	I0813 20:09:36.454226   31202 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:09:36.464696   31202 system_pods.go:86] 18 kube-system pods found
	I0813 20:09:36.464754   31202 system_pods.go:89] "coredns-558bd4d5db-gfxc4" [23bd3629-43d4-4649-b36c-2b3a94e87aa9] Running
	I0813 20:09:36.464767   31202 system_pods.go:89] "csi-hostpath-attacher-0" [4456fd59-fa54-4d29-b478-ffb6615cc1b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity rules.)
	I0813 20:09:36.464784   31202 system_pods.go:89] "csi-hostpath-provisioner-0" [3640902d-c0be-44ba-9d53-f888702d8b5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0813 20:09:36.464797   31202 system_pods.go:89] "csi-hostpath-resizer-0" [fd8a24f6-a473-41d2-bd8e-56adf37f33ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 20:09:36.464809   31202 system_pods.go:89] "csi-hostpath-snapshotter-0" [45fe597b-bfb9-4a56-8ab4-82e31052f365] Pending
	I0813 20:09:36.464824   31202 system_pods.go:89] "csi-hostpathplugin-0" [65744517-cccd-4766-9501-26eb0ffd6962] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0813 20:09:36.464848   31202 system_pods.go:89] "etcd-addons-20210813200811-30853" [b6a41140-5e92-487f-9195-f893f1a25736] Running
	I0813 20:09:36.464858   31202 system_pods.go:89] "kube-apiserver-addons-20210813200811-30853" [d5158552-87af-46e6-a553-9068b411a08b] Running
	I0813 20:09:36.464871   31202 system_pods.go:89] "kube-controller-manager-addons-20210813200811-30853" [2c8d2556-1edb-42e6-ab70-c5c067fa449b] Running
	I0813 20:09:36.464882   31202 system_pods.go:89] "kube-proxy-kdr8f" [71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f] Running
	I0813 20:09:36.464894   31202 system_pods.go:89] "kube-scheduler-addons-20210813200811-30853" [e70f42d6-f2b1-43a4-b29b-778c80623bed] Running
	I0813 20:09:36.464906   31202 system_pods.go:89] "metrics-server-77c99ccb96-gnqsc" [c3871703-1162-4d76-bf75-ce2c9fa75212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:09:36.464929   31202 system_pods.go:89] "registry-h6s98" [1829875c-4f3b-483e-8582-350974b1fece] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 20:09:36.464946   31202 system_pods.go:89] "registry-proxy-h8lsg" [acb087eb-33aa-47b9-8ccd-ecea64c4ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 20:09:36.464961   31202 system_pods.go:89] "snapshot-controller-989f9ddc8-sxvj5" [55a8491c-98b9-4fba-b435-71e28bf57dcc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:36.464976   31202 system_pods.go:89] "snapshot-controller-989f9ddc8-vwk22" [5f5fcf8f-f47d-4003-90db-51cd367870ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:36.464992   31202 system_pods.go:89] "storage-provisioner" [c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:09:36.465008   31202 system_pods.go:89] "tiller-deploy-768d69497-bmgs8" [b27137ea-ef0f-44d5-9fd1-42ec9aa91f1e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 20:09:36.465022   31202 system_pods.go:126] duration metric: took 10.790367ms to wait for k8s-apps to be running ...
	I0813 20:09:36.465037   31202 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:09:36.465092   31202 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
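system_svc.go:44 confirms the kubelet unit is running via `systemctl is-active --quiet` over SSH; `--quiet` suppresses output, so the exit code alone carries the answer. Run locally (and without the literal `service` token minikube passes), the check reduces to:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses stdout; systemctl signals "active" purely via
	// exit code 0, so cmd.Run's error is the whole check.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
```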
	I0813 20:09:36.722375   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:36.729756   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:36.915998   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:37.263670   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:37.264067   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:37.424687   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:37.722925   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:37.728363   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:37.916492   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:38.219924   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:38.230075   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:38.416234   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:38.725502   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:38.731465   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:38.916265   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:39.220360   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:39.234146   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:39.422049   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:39.732166   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:39.732279   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:39.915540   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:40.220889   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:40.228304   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:40.421757   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:40.432283   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.528008216s)
	I0813 20:09:40.432331   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:40.432346   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:40.432365   31202 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.444550472s)
	I0813 20:09:40.432395   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:40.432412   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:40.432403   31202 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.967285332s)
	I0813 20:09:40.432465   31202 system_svc.go:56] duration metric: took 3.967425892s WaitForService to wait for kubelet.
	I0813 20:09:40.432475   31202 kubeadm.go:547] duration metric: took 16.679152188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:09:40.432501   31202 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:09:40.433538   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:40.433548   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:40.433538   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:40.433565   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:40.433571   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:40.433576   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:40.433549   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:40.433582   31202 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:40.433585   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:40.433598   31202 main.go:130] libmachine: (addons-20210813200811-30853) Calling .Close
	I0813 20:09:40.433901   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:40.433908   31202 main.go:130] libmachine: (addons-20210813200811-30853) DBG | Closing plugin on server side
	I0813 20:09:40.433907   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:40.433949   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:40.434839   31202 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:40.434875   31202 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:40.437800   31202 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:09:40.437826   31202 node_conditions.go:123] node cpu capacity is 2
	I0813 20:09:40.437840   31202 node_conditions.go:105] duration metric: took 5.333666ms to run NodePressure ...
	I0813 20:09:40.437849   31202 start.go:231] waiting for startup goroutines ...
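node_conditions.go:102-123 lists each node's capacity and verifies that no pressure condition is reporting True. A hedged client-go sketch of the same inspection (illustrative, not minikube's source):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	pressure := map[corev1.NodeConditionType]bool{
		corev1.NodeMemoryPressure: true,
		corev1.NodeDiskPressure:   true,
		corev1.NodePIDPressure:    true,
	}
	for _, n := range nodes.Items {
		// Mirrors the "ephemeral capacity" / "cpu capacity" log lines above.
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if pressure[c.Type] && c.Status == corev1.ConditionTrue {
				fmt.Printf("node %s reports %s\n", n.Name, c.Type)
			}
		}
	}
}
```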
	I0813 20:09:40.722115   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:40.737840   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:40.916031   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:41.230777   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:41.231041   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:41.414140   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:41.720539   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:41.728342   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:41.914293   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:42.226629   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:42.237936   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:42.420153   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:42.722931   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:42.729084   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:42.918813   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:43.224088   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:43.232203   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:43.418492   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:43.728571   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:43.734358   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:43.912798   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:44.222097   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:44.229639   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:44.411351   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:44.721174   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:44.729464   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:44.913395   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:45.220502   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:45.228479   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:45.412343   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:45.731442   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:45.738532   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:45.915174   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:46.220783   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:46.228984   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:46.415264   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:46.724054   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:46.729899   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:46.912619   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:47.259422   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:47.259637   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:47.430811   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:47.720219   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:47.731935   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:47.917705   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:48.219470   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:48.233222   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:48.412390   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:48.719535   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:48.730388   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:48.912420   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:49.222614   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:49.230991   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:49.414159   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:49.720854   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:49.753194   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.689259   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:50.689413   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:50.690440   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.719214   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:50.732865   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.912570   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:51.219921   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.234558   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:51.413951   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:51.720130   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.729241   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:51.914476   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:52.224731   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:52.228912   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.413265   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:52.721703   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:52.728627   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.913025   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:53.226025   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:53.230725   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:53.420350   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:53.721875   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:53.737332   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:53.917355   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:54.221170   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:54.229472   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:54.411350   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:54.721829   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:54.740825   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:54.919696   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:55.257674   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.257839   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:55.463326   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:55.850478   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.851043   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:55.956960   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.223669   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:56.229147   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:56.412147   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.722976   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:56.730514   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:56.912832   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:57.222933   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:57.230178   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.411649   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:57.720785   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:57.728264   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.912181   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:58.220300   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:58.235291   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:58.413097   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:58.733441   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:58.746532   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:58.911276   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:59.223068   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:59.242909   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.028291   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.028680   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:00.033989   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.219667   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.240669   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.414401   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:00.724045   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.746273   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.911406   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:01.221400   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:01.230126   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:01.411947   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:01.720211   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:01.730749   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:01.911977   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:02.219797   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:02.228470   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:02.417623   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:02.719595   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:02.729748   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:02.912201   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:03.220835   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.228921   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:03.414342   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:03.720043   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.728900   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:03.913564   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:04.220296   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:04.229843   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:04.415998   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:04.721268   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:04.728570   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:04.911235   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:05.223864   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:05.235134   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:05.414493   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:05.719568   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:05.741022   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:05.949973   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:06.222531   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:06.229781   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.567159   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:06.721219   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:06.729025   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.912870   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:07.219834   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:07.227986   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:07.412967   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:07.721136   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:07.729523   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:07.913396   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:08.222081   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:08.229448   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:08.418180   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:08.719448   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:08.730028   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:08.913707   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:09.222863   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.228247   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:09.412589   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:09.721726   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.728614   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:09.912132   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.222302   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:10.230127   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.413403   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.721225   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:10.729489   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.919787   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:11.221838   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:11.229396   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:11.411999   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:11.720894   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:11.737842   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:11.912702   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:12.219205   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:12.229835   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:12.420507   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:12.719327   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:12.729314   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:12.911031   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:13.220290   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:13.228810   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:13.416933   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:13.723369   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:13.731767   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:13.912844   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:14.219997   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:14.234211   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:14.414757   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:14.720120   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:14.740308   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:14.911670   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:15.220869   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:15.229471   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:15.411875   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:15.720987   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:15.730503   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:15.913462   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:16.220029   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:16.231192   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:16.414430   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:16.719352   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:16.734617   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:16.912735   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:17.228263   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:17.237055   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:17.413523   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:17.719763   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:17.729766   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:17.915396   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:18.227556   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:18.261897   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:18.416028   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:18.722611   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:18.735373   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:18.913850   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:19.219962   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.241381   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:19.412821   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:19.719662   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.728680   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:19.911844   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.219548   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:20.229070   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:20.414245   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.720417   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:20.731503   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:20.915116   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:21.220880   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:21.231126   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.829456   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:21.832847   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:21.832880   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.917092   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:22.220267   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:22.238790   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:22.411058   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:22.719943   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:22.731274   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:22.916000   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:23.220833   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:23.229359   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:23.412398   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:23.723861   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:23.729339   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:23.923159   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:24.220613   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:24.229195   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:24.425535   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:24.719309   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:24.736052   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:24.912401   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:25.218433   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:25.230028   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:25.412521   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:25.722751   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:25.730266   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:25.919594   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:26.218646   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:26.228585   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:26.411244   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:26.720633   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:26.730208   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:26.912115   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:27.220236   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:27.232983   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:27.413283   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:27.720315   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:27.729260   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:27.911408   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:28.219681   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:28.228327   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:28.411182   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:28.719234   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:28.728464   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:28.911595   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:29.220916   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:29.229000   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:29.412162   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:29.719828   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:29.727976   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:29.911408   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:30.220220   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:30.229775   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:30.411642   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:30.721609   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:30.729108   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:30.915485   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:31.218953   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:31.229587   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:31.412179   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:31.721975   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:31.733881   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:31.917348   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:32.220290   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:32.228946   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:32.412166   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:32.719178   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:32.729241   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:32.914561   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:33.219271   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:33.230085   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:33.874280   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:33.874286   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:33.878766   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:33.914539   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:34.219631   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:34.229358   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:34.432824   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:34.719387   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:34.729961   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:34.912641   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:35.220188   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:35.228663   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:35.420546   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:35.720887   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:35.730031   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:35.939238   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:36.219510   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:36.229163   31202 kapi.go:108] duration metric: took 1m1.595866433s to wait for kubernetes.io/minikube-addons=registry ...
	I0813 20:10:36.415627   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:36.718995   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:36.928971   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:37.219222   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:37.438156   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:37.722066   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:37.916507   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:38.220410   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:38.413179   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:38.719404   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:38.912246   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:39.219038   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:39.429716   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:39.839950   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:39.915909   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:40.224637   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:40.413975   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:40.720038   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:40.912353   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:41.220400   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:41.414646   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:41.718740   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:41.911579   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:42.220290   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:42.413621   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:42.719216   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:42.913336   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:43.220090   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:43.412090   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:43.719832   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:43.911780   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:44.220074   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:44.412837   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:44.719410   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:44.912203   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:45.219196   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:45.411636   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:45.719058   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:45.913482   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:46.219695   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:46.412993   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:46.719703   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:46.912881   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:47.219225   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:47.419533   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:47.721744   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:47.911954   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:48.219531   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:48.412197   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:48.721548   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:48.911584   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:49.219880   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:49.412125   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:49.723391   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:49.912316   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:50.221550   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:50.563666   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:50.722654   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:50.917509   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:51.220164   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:51.417695   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:51.719743   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:51.914443   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:52.219190   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:52.419514   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:52.719731   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:52.917362   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:53.219748   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:53.415321   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:53.720922   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:53.912748   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:54.222238   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:54.412263   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:54.719767   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:54.912276   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:55.220690   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:55.424167   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:55.732788   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:55.911685   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:56.221776   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:56.416047   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:56.723188   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:56.921634   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:57.221151   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:57.421565   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:57.725062   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:57.917390   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:58.224182   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:58.414768   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:58.734763   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:58.916900   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:59.222271   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:59.418044   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:59.724939   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:59.918472   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:00.220529   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:00.412744   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:00.729747   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:00.915356   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:01.250705   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:01.417212   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:01.724389   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:01.917025   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:02.219380   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:02.415690   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:02.720671   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:02.912353   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:03.223425   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:03.415949   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:03.723887   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:03.916673   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:04.225346   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:04.417120   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:04.723429   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:04.917387   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:05.226395   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:05.412292   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:05.720511   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:05.912975   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:06.220913   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:06.413100   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:06.720704   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:06.913523   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:07.220296   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:07.413014   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:07.721865   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:07.922071   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:08.220648   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:08.413326   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:08.723520   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:08.916654   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:09.222550   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:09.413620   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:09.719906   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:09.912443   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:10.222783   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:10.412768   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:10.720120   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:10.913317   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:11.219757   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:11.412749   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:11.721627   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:11.912808   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:12.746483   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:12.753274   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:12.918109   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:13.221975   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:13.413275   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:13.720338   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:13.912842   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:14.220221   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:14.419624   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:14.719264   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:14.914674   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:15.224762   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:15.416050   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:15.721652   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:15.919593   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:16.221680   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:16.424081   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:16.720880   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:16.913710   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:17.224490   31202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:17.413805   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:17.722476   31202 kapi.go:108] duration metric: took 1m43.090942801s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 20:11:17.911553   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:18.412574   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:18.913769   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:19.413832   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:19.913975   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:20.417455   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:20.913614   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:21.413918   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:21.912393   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:22.431636   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:22.914223   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:23.413826   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:23.912018   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:24.413693   31202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:24.926664   31202 kapi.go:108] duration metric: took 1m48.590323272s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 20:11:24.928553   31202 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, helm-tiller, metrics-server, volumesnapshots, olm, registry, ingress, csi-hostpath-driver
	I0813 20:11:24.928587   31202 addons.go:344] enableAddons completed in 2m1.17513083s
	I0813 20:11:24.972903   31202 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:11:24.974943   31202 out.go:177] * Done! kubectl is now configured to use "addons-20210813200811-30853" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:08:23 UTC, end at Fri 2021-08-13 20:17:05 UTC. --
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.903566842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fbcf2eb2-c97c-4c31-94c2-c2c3571b3050 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.903858252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fbcf2eb2-c97c-4c31-94c2-c2c3571b3050 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.904800141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fbcf2eb2-c97c-4c31-94c2-c2c3571b3050 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.947175735Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:495"
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.947614883Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[],Stderr:[FILTERED],ExitCode:0,}" file="go-grpc-middleware/chain.go:25" id=3354860d-1aee-4940-a095-105f8361e0f7 name=/runtime.v1alpha2.RuntimeService/ExecSync
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.969249442Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:495"
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.969616329Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[],Stderr:[FILTERED],ExitCode:0,}" file="go-grpc-middleware/chain.go:25" id=5a5b016f-7922-424b-bac6-a81a0a3a28c4 name=/runtime.v1alpha2.RuntimeService/ExecSync
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.989573243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d5c3efd1-a5c6-49bb-bca3-0859d7115596 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.989729861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d5c3efd1-a5c6-49bb-bca3-0859d7115596 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:04 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:04.990328846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d5c3efd1-a5c6-49bb-bca3-0859d7115596 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.026088254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4eddb6ed-4f84-42b4-bcfd-a67feef94f26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.026556317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4eddb6ed-4f84-42b4-bcfd-a67feef94f26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.028072532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4eddb6ed-4f84-42b4-bcfd-a67feef94f26 name=/runtime.v1alpha2.RuntimeService/ListContainers
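Each exchange in this log is the same unfiltered CRI round trip: a ListContainersRequest arrives with an empty ContainerFilter, CRI-O logs "No filters were applied, returning full container list", and replies with every container it tracks. As a minimal sketch (not part of the test suite), the snippet below issues the same v1alpha2 ListContainers RPC against CRI-O; the socket path /var/run/crio/crio.sock is CRI-O's usual default and, like the 13-character ID truncation, is an assumption for illustration only:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// CRI-O serves the CRI over a unix socket (assumed default path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter reproduces the "No filters were applied" path seen
		// above: the runtime returns its full container list.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			id := c.Id
			if len(id) > 13 {
				id = id[:13] // short ID, as crictl prints it
			}
			fmt.Printf("%-13s  %-25s  %v\n", id, c.Metadata.Name, c.State)
		}
	}

From the node, sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a performs the same RPC and is the quicker way to inspect this list by hand.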
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.068961995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4231a271-b54a-48c3-9e47-443b6e5d65d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.069159756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4231a271-b54a-48c3-9e47-443b6e5d65d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.069569406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4231a271-b54a-48c3-9e47-443b6e5d65d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.111839801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46f73db4-e23f-497c-ae70-617c23b50101 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.112205226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46f73db4-e23f-497c-ae70-617c23b50101 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.113909110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46f73db4-e23f-497c-ae70-617c23b50101 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.151222548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9bd5b092-fa7a-404b-b544-e48f952c4047 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.151441994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9bd5b092-fa7a-404b-b544-e48f952c4047 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.153326603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9bd5b092-fa7a-404b-b544-e48f952c4047 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.205402252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=badb2848-5413-4785-9eef-ba0503eb2654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.205546028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=badb2848-5413-4785-9eef-ba0503eb2654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:17:05 addons-20210813200811-30853 crio[2076]: time="2021-08-13 20:17:05.206064284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a6e56cac6f0ecc658abde4d6754f87b832294e6ed2311c46128b127098cf392,PodSandboxId:0e20abe434cee8ceef97db18a6f7c1e78242c551de0848aa4fbdb50403e9bbb2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628885592421439880,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c4e23a2-c3ff-4a34-a0ac-c6460e9e726c,},Annotations:map[string]string{io.kubernetes.container.hash: 68fa41f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582531841557,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash:
e0838b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885582136319196,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.container.hash: 5f6
6b328,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01,PodSandboxId:746710b5fffdee0e1a081857295319add92f2872cf749270dc6c9cfefe21390c,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628885581492508684,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-b97bk,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 915fed84-4ba9-4ec5-a260-73f5beb71079,},Annotations:map[string]string{io.kubernetes.
container.hash: c794ea38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732fcfa3301b11ca797ef1f40584382eedbb8e50c9889b6d906fd04162970153,PodSandboxId:1bff9f733c091126f30ec7339cf04cabbda87d75336de8966386d4fe8d120b46,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885571047438782,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-xjxw9,io.kubernetes.pod.namespace: defa
ult,io.kubernetes.pod.uid: d4598781-17d6-40dd-8760-b02c4e9c31f7,},Annotations:map[string]string{io.kubernetes.container.hash: edfc9f52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171213d654a90c7bbc4872a7ecb10fac6504cc3d70439127f28f43a26434f4ba,PodSandboxId:e62e1b486d55b570a8a2f7394561524a8b64dde5bdd2b5bd2df1144dd359b59c,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628885552128088825,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-fzcpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9807aa4-fbf0-4882-8430-19ab67fd1b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 256640a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6687f0f80b60954eaedf2efe71e7de52958317df7dc1ccb6e5443381c4953c9b,PodSandboxId:98aa733c7601bbebf0c032a744f0f9fcdfeb7115c8e2b90b91da9b129ce4e858,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628885519034486969,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1b84144-d4f0-4c62-ac7b-9ed065e6f23c,},Annotations:map[string]string{io.kubernetes.container.hash: e3c63d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0b4babb1a378ff636eec5fba1dcf45135f2aa947c9d764b227996cc05158f0,PodSandboxId:13bf53db9c54ca86ca04b306592f8736552ee1e9f5d83d018a65b6f9a6677076,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885451767324034,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes
.pod.name: packageserver-67fc94bc46-j8jrl,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: ee06352d-ffa1-463f-b2e6-232ca6dbe2dd,},Annotations:map[string]string{io.kubernetes.container.hash: aded265a,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b674ce9e6503ea0f026f4eb28fa58bc26bb22d08f9d4ff6450d5039e6057615a,PodSandboxId:c5aff317bd1767ac51decb45290816ed15ce3c69356efe0e8a9d712e73c58637,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNI
NG,CreatedAt:1628885450837523012,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-67fc94bc46-r9m4b,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 6672f414-2c72-45f1-be32-f615183e971b,},Annotations:map[string]string{io.kubernetes.container.hash: 27693a1c,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87992a088c0223dab9b2a0823e95bdceafbc3c48587395f05f49fef52c7ca9f0,PodSandboxId:3eddc248ebd2babf8a8d6854c734c345d1935b4e9cd964339c742430811d9e75,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]st
ring{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628885449892640125,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-cqbtm,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f61fde8c-0bdf-4e93-a96f-2181f2d62fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7423d1,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93f08adfb4e0babcc9a23276d4e7c2facc57bac633a8425d7b21d1db2752fbb,PodSandboxId:747c4c5377b5cfe0e82b0aff107bb8d5b91c9b1cea132708e034d74274b89fa7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885436575385240,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7vftf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139ecada-0447-4ba8-8167-185fe7b13e57,},Annotations:map[string]string{io.kubernetes.container.hash: 7474616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d6bef90f6b895dd69fd4482a62667c6f78e467ab62fcc14e28a2ce965e6b56,PodSandboxId:64cbdfd5ccccf64d6ab399f634303d578f339ebf790dc83a5b8c923560e0128d,Metadata:&ContainerMetadata{Name:crea
te,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628885405297328107,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrs4t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd4f7bc3-556e-4421-bdf7-5a4ddba42249,},Annotations:map[string]string{io.kubernetes.container.hash: 41611cb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb00e59fba031928ae4feff1e5851d749e434f7ec13a746b9878587074a9a372,PodSandboxId:45421eafadb6365f77ca31bc886cf366ac6e8aa09e5c0bd22279e2127fe48ea4,M
etadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885394203618416,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-sbs5r,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70f59bff-f3c5-42c3-9910-5716403e87f0,},Annotations:map[string]string{io.kubernetes.container.hash: c85db323,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:19a00c14da08f1a6b52015584f10099b995a4f111fa6545c31dbdc5f29ba0bd1,PodSandboxId:68e2b3fb3cd572acce3e72f6f991c8abed3c1da1365d22d5c9f907808f9d6944,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628885393247502311,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-mwtwb,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f,},Annotations:map[string]string{io.kubernetes.container.hash: 7b60f4bd,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145,PodSandboxId:7e9a3a809d3018e4ca0067a1fb1ad0fcd771161b867c75faf6a9e2fdce82b372,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628885383394819197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45bfeb9-65a1-4eba-9a76-fdf6d0d99b64,},Annotations:map[string]string{io.kubernetes.container.hash: 1f79c2dc,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0,PodSandboxId:f78d9d73d77a92147abaae5f88570d446bb5d09f6f9179e1671e54361b31a6e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628885365925593862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-gfxc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23bd3629-43d4-4649-b36c-2b3a94e87aa9,},Annotations:map[string]string{io.kubernetes.container.hash: a16fd347,io.kubernetes.container.ports: [{\"n
ame\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2,PodSandboxId:431924b542fbffa1243b60109d9642811ca28a925170faff0bcbc1eecc5652d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628885364890958243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kdr8f,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 71cbbfcc-fa2b-4052-b8d1-6c8cb701e72f,},Annotations:map[string]string{io.kubernetes.container.hash: de6f9d4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820,PodSandboxId:78eb05a2a350b9db310337d95d57a52122a90c95ebef35863ce345f2c09eb145,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628885342367309898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
df0a5d900c5bd0f8ab799d8e76acd48c,},Annotations:map[string]string{io.kubernetes.container.hash: 645db5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09,PodSandboxId:85eb3ff7ec77a2353bd03065bd37b1711638241fb77738188cabacfbd552793c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628885342327696058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ae03
eda7d24fd27d1ec6eae9ea90ee,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0,PodSandboxId:87d24b893d3cd7c5e3d2d28a9420f0c06573aac9455b06830f5c5406e7f85d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628885341922998495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4a8f333b7cf937a5265cc85b36be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34,PodSandboxId:3d8c441b9e85e6a9e3cef6dc4476d00e766eefb8e629b4a98350813d4f2bbb02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628885341516275791,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210813200811-30853,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: fbba398392a43db09d89bcc8d1904990,},Annotations:map[string]string{io.kubernetes.container.hash: 8b548f47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=badb2848-5413-4785-9eef-ba0503eb2654 name=/runtime.v1alpha2.RuntimeService/ListContainers
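The timestamped Request/Response pairs above are debug-level traces of CRI RuntimeService/ListContainers calls served by cri-o. Assuming the profile is still running, the same RPC can be driven by hand with crictl over the socket recorded in the node annotations below (/var/run/crio/crio.sock); a sketch, not part of the captured run:

    out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "sudo crictl ps -a -o json"

The JSON output should carry the same Id, PodSandboxId, Labels, and Annotations fields that appear in the serialized responses above.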
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	8a6e56cac6f0e       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 3 minutes ago       Running             nginx                     0                   0e20abe434cee
	a1b2e2d11e415       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-restore-operator     0                   746710b5fffde
	6f91e2863088d       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-backup-operator      0                   746710b5fffde
	d2410665dbb51       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            4 minutes ago       Running             etcd-operator             0                   746710b5fffde
	732fcfa3301b1       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   4 minutes ago       Running             private-image-eu          0                   1bff9f733c091
	171213d654a90       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                4 minutes ago       Running             private-image             0                   e62e1b486d55b
	6687f0f80b609       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               5 minutes ago       Running             busybox                   0                   98aa733c7601b
	ac0b4babb1a37       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   13bf53db9c54c
	b674ce9e6503e       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   c5aff317bd176
	87992a088c022       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 6 minutes ago       Running             registry-server           0                   3eddc248ebd2b
	e93f08adfb4e0       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6                                  6 minutes ago       Exited              patch                     0                   747c4c5377b5c
	62d6bef90f6b8       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6                                  7 minutes ago       Exited              create                    0                   64cbdfd5ccccf
	fb00e59fba031       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          7 minutes ago       Running             catalog-operator          0                   45421eafadb63
	19a00c14da08f       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          7 minutes ago       Running             olm-operator              0                   68e2b3fb3cd57
	f68e4225627da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                7 minutes ago       Running             storage-provisioner       0                   7e9a3a809d301
	b1078e6d14ed6       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                7 minutes ago       Running             coredns                   0                   f78d9d73d77a9
	1d63b71160ddb       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                7 minutes ago       Running             kube-proxy                0                   431924b542fbf
	9cb303098506c       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                8 minutes ago       Running             etcd                      0                   78eb05a2a350b
	fa933bbd49bf5       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                8 minutes ago       Running             kube-scheduler            0                   85eb3ff7ec77a
	6482aea484cc8       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                8 minutes ago       Running             kube-controller-manager   0                   87d24b893d3cd
	2f5386975c2ba       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                8 minutes ago       Running             kube-apiserver            0                   3d8c441b9e85e
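The table above is crictl's tabular listing (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID); the two Exited rows are the ingress-nginx admission create/patch jobs, which run to completion by design. While the profile exists, an equivalent listing can be pulled straight from the node; a sketch:

    out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "sudo crictl ps -a"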
	
	* 
	* ==> coredns [b1078e6d14ed625cd64b8a3a86647582e229c20a16d3ac89d2544507b5c85ea0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
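The two "Running configuration MD5" lines bracket a live Corefile reload: CoreDNS logged its initial config hash, picked up a changed ConfigMap, and reloaded in place, which is why the container-status table above still shows restartCount 0 for coredns. Assuming the cluster is still reachable, the same log comes from:

    kubectl --context addons-20210813200811-30853 -n kube-system logs coredns-558bd4d5db-gfxc4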
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210813200811-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210813200811-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=addons-20210813200811-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_09_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210813200811-30853
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:09:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210813200811-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:16:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:13:17 +0000   Fri, 13 Aug 2021 20:09:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:13:17 +0000   Fri, 13 Aug 2021 20:09:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:13:17 +0000   Fri, 13 Aug 2021 20:09:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:13:17 +0000   Fri, 13 Aug 2021 20:09:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    addons-20210813200811-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935040Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935040Ki
	  pods:               110
	System Info:
	  Machine ID:                 0834b5f4327a4e7993d81b4e13b999ae
	  System UUID:                0834b5f4-327a-4e79-93d8-1b4e13b999ae
	  Boot ID:                    0be9a26e-686d-4147-bf92-ff289761fda4
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  default                     private-image-7ff9c8c74f-fzcpx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     private-image-eu-5956d58f9f-xjxw9                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 coredns-558bd4d5db-gfxc4                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m42s
	  kube-system                 etcd-addons-20210813200811-30853                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m49s
	  kube-system                 kube-apiserver-addons-20210813200811-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-controller-manager-addons-20210813200811-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-proxy-kdr8f                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-scheduler-addons-20210813200811-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  my-etcd                     etcd-operator-85cd4f54cd-b97bk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  olm                         catalog-operator-75d496484d-sbs5r                      10m (0%)      0 (0%)      80Mi (2%)        0 (0%)         7m31s
	  olm                         olm-operator-859c88c96-mwtwb                           10m (0%)      0 (0%)      160Mi (4%)       0 (0%)         7m31s
	  olm                         operatorhubio-catalog-cqbtm                            10m (0%)      0 (0%)      50Mi (1%)        0 (0%)         7m11s
	  olm                         packageserver-67fc94bc46-j8jrl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  olm                         packageserver-67fc94bc46-r9m4b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                780m (39%)   0 (0%)
	  memory             460Mi (11%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  8m6s (x6 over 8m6s)  kubelet     Node addons-20210813200811-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m6s (x5 over 8m6s)  kubelet     Node addons-20210813200811-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m6s (x5 over 8m6s)  kubelet     Node addons-20210813200811-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m49s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m49s                kubelet     Node addons-20210813200811-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m49s                kubelet     Node addons-20210813200811-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m49s                kubelet     Node addons-20210813200811-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m49s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m42s                kubelet     Node addons-20210813200811-30853 status is now: NodeReady
	  Normal  Starting                 7m40s                kube-proxy  Starting kube-proxy.
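This whole section is kubectl output and can be regenerated while the profile exists:

    kubectl --context addons-20210813200811-30853 describe node addons-20210813200811-30853

The Allocated-resources percentages are taken against node capacity: 780m of CPU requested on a 2-CPU (2000m) node is 39%, and 460Mi requested of 3935040Ki (roughly 3843Mi) of memory truncates to 11%, consistent with the figures above.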
	
	* 
	* ==> dmesg <==
	* [  +6.201914] kauditd_printk_skb: 14 callbacks suppressed
	[ +13.669149] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.226266] NFSD: Unable to end grace period: -110
	[  +6.573006] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.661256] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.552489] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.110413] kauditd_printk_skb: 14 callbacks suppressed
	[Aug13 20:11] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.110942] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.097122] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.032975] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.178933] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.971945] kauditd_printk_skb: 11 callbacks suppressed
	[Aug13 20:12] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.530920] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.831131] kauditd_printk_skb: 77 callbacks suppressed
	[  +8.842590] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.099462] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.223376] kauditd_printk_skb: 44 callbacks suppressed
	[Aug13 20:13] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.012802] kauditd_printk_skb: 83 callbacks suppressed
	[ +12.492514] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.359744] kauditd_printk_skb: 122 callbacks suppressed
	[Aug13 20:16] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.346816] kauditd_printk_skb: 2 callbacks suppressed
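Nearly all of this section is kauditd_printk_skb rate-limit notices from the kernel audit subsystem, routine noise under a busy CRI workload; the only substantive entry is the NFSD grace-period failure, where -110 is -ETIMEDOUT. The ring buffer can be re-read from the node; a sketch:

    out/minikube-linux-amd64 -p addons-20210813200811-30853 ssh "dmesg"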
	
	* 
	* ==> etcd [6f91e2863088d53f8c13a80604ef0ade825100d23ff7b9e07e35c9d525f58cc6] <==
	* time="2021-08-13T20:13:02Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:13:02Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-13T20:13:02Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-13T20:13:02Z" level=info msg="Git SHA: c8a1c64"
	E0813 20:13:02.287695       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"fd9f7efc-15a0-4ff8-acc6-716db27201ec", ResourceVersion:"1911", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482382, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-b97bk\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:13:02Z\",\"renewTime\":\"2021-08-13T20:13:02Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-b97bk became leader'
	time="2021-08-13T20:13:02Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [9cb303098506cea6f7aaedf8e9ba9ce8bc7225341e8ba95edf52e50c3f0ac820] <==
	* 2021-08-13 20:13:02.540337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:13:12.528934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:13:22.528455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:13:32.527414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:13:42.527854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:13:52.528712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:02.527557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:12.527367 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:22.527915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:32.528257 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:42.528066 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:52.528524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:02.527529 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:12.527995 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:22.528479 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:32.526691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:42.527451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:52.526933 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:02.528423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:12.527665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:22.528442 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:32.527914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:42.527014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:52.527573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:02.528289 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [a1b2e2d11e415c69ded95530eaf8ac12f2f33a681caf846ed1f8832229b871bd] <==
	* time="2021-08-13T20:13:03Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:13:03Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-13T20:13:03Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-13T20:13:03Z" level=info msg="Git SHA: c8a1c64"
	E0813 20:13:03.649710       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"e34d3eff-e3a2-41fb-82be-bad120453150", ResourceVersion:"1937", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482383, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-b97bk\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:13:03Z\",\"renewTime\":\"2021-08-13T20:13:03Z\",\"leaderTransitions\":1}", "endpoints.kubernetes.io/last-change-trigger-time":"2021-08-13T20:13:03Z"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-b97bk became leader'
	time="2021-08-13T20:13:04Z" level=info msg="listening on 0.0.0.0:19999"
	time="2021-08-13T20:13:04Z" level=info msg="starting restore controller" pkg=controller
	
	* 
	* ==> etcd [d2410665dbb51667ab76511afe66f3cbbe4167d05cb3425c3c841757f4c6bb01] <==
	* time="2021-08-13T20:13:01Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-13T20:13:01Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-13T20:13:01Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:13:01Z" level=info msg="Go OS/Arch: linux/amd64"
	E0813 20:13:01.798637       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"6dc4d6e6-d316-4cf6-ba9d-3315ac5132c9", ResourceVersion:"1905", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482381, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-b97bk\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:13:01Z\",\"renewTime\":\"2021-08-13T20:13:01Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-b97bk became leader'
	
	* 
	* ==> kernel <==
	*  20:17:05 up 8 min,  0 users,  load average: 2.13, 2.70, 1.70
	Linux addons-20210813200811-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [2f5386975c2ba60d55783b65e8928c6ce8eb3df5f1269c454a1966b3ab0b2a34] <==
	* I0813 20:13:07.660621       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:13:07.660675       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:13:07.660687       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0813 20:13:25.817702       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 20:13:25.903693       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 20:13:26.182065       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0813 20:13:38.795578       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:13:38.795708       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:13:38.795719       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:14:22.473990       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:14:22.474059       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:14:22.474078       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:14:55.143412       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:14:55.143552       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:14:55.143566       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:15:30.331642       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:15:30.331931       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:15:30.331980       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:16:13.038784       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:16:13.038971       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:16:13.038990       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:16:42.090294       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0813 20:16:44.183881       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:16:44.184026       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:16:44.184039       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [6482aea484cc8b73dffb03472fc0cd525e15334dec5017662b6eeb69183e72b0] <==
	* E0813 20:13:33.507850       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:34.861524       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:40.123368       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:42.343094       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:43.577629       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0813 20:13:54.220185       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0813 20:13:54.220413       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:13:54.619349       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0813 20:13:54.619461       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0813 20:13:56.590242       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:04.450408       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:05.397318       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:23.878722       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:39.641386       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:40.186876       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:04.558664       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:23.308351       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:28.872546       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:34.905481       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:57.040267       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:05.374521       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:22.596313       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:40.504364       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-m8m47" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0813 20:16:47.972930       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:50.421617       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [1d63b71160ddb7d1df0458319f5b52c4dbd2ad5a9acffa6b1fdbd601418bd0b2] <==
	* I0813 20:09:25.238732       1 node.go:172] Successfully retrieved node IP: 192.168.39.144
	I0813 20:09:25.238955       1 server_others.go:140] Detected node IP 192.168.39.144
	W0813 20:09:25.239762       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:09:25.477057       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:09:25.484245       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:09:25.484268       1 server_others.go:212] Using iptables Proxier.
	I0813 20:09:25.484574       1 server.go:643] Version: v1.21.3
	I0813 20:09:25.504093       1 config.go:315] Starting service config controller
	I0813 20:09:25.511733       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:09:25.511686       1 config.go:224] Starting endpoint slice config controller
	I0813 20:09:25.512072       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:09:25.526342       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:09:25.531042       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:09:25.612778       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:09:25.613919       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:11:00.274331       1 trace.go:205] Trace[1941469767]: "iptables restore" (13-Aug-2021 20:10:58.264) (total time: 2009ms):
	Trace[1941469767]: [2.009790287s] [2.009790287s] END
	
	* 
	* ==> kube-scheduler [fa933bbd49bf519154fa65f04978f2b5a3fde3078ac61373541486222ccdfb09] <==
	* I0813 20:09:07.055839       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:09:07.058225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:09:07.058357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:09:07.059485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:07.059591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:07.059670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:09:07.059737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:09:07.060190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:09:07.060885       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:09:07.061038       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:09:07.061332       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:09:07.061553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:09:07.061996       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:07.062031       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:07.063517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:09:07.904046       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:09:07.908343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:09:07.918096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:09:08.066375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:09:08.069690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:09:08.091031       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:08.369500       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:09:08.390669       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:09:08.603699       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 20:09:11.256341       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:08:23 UTC, end at Fri 2021-08-13 20:17:05 UTC. --
	Aug 13 20:13:29 addons-20210813200811-30853 kubelet[2807]: I0813 20:13:29.451831    2807 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdqjv\" (UniqueName: \"kubernetes.io/projected/5f5fcf8f-f47d-4003-90db-51cd367870ec-kube-api-access-mdqjv\") pod \"5f5fcf8f-f47d-4003-90db-51cd367870ec\" (UID: \"5f5fcf8f-f47d-4003-90db-51cd367870ec\") "
	Aug 13 20:13:29 addons-20210813200811-30853 kubelet[2807]: I0813 20:13:29.468346    2807 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5fcf8f-f47d-4003-90db-51cd367870ec-kube-api-access-mdqjv" (OuterVolumeSpecName: "kube-api-access-mdqjv") pod "5f5fcf8f-f47d-4003-90db-51cd367870ec" (UID: "5f5fcf8f-f47d-4003-90db-51cd367870ec"). InnerVolumeSpecName "kube-api-access-mdqjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:13:29 addons-20210813200811-30853 kubelet[2807]: I0813 20:13:29.553014    2807 reconciler.go:319] "Volume detached for volume \"kube-api-access-mdqjv\" (UniqueName: \"kubernetes.io/projected/5f5fcf8f-f47d-4003-90db-51cd367870ec-kube-api-access-mdqjv\") on node \"addons-20210813200811-30853\" DevicePath \"\""
	Aug 13 20:13:36 addons-20210813200811-30853 kubelet[2807]: I0813 20:13:36.603684    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-fzcpx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:14:00 addons-20210813200811-30853 kubelet[2807]: I0813 20:14:00.601898    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-xjxw9" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:14:16 addons-20210813200811-30853 kubelet[2807]: I0813 20:14:16.604805    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:14:28 addons-20210813200811-30853 kubelet[2807]: I0813 20:14:28.602203    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:14:57 addons-20210813200811-30853 kubelet[2807]: I0813 20:14:57.602612    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-fzcpx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:15:25 addons-20210813200811-30853 kubelet[2807]: I0813 20:15:25.602059    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-xjxw9" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:15:41 addons-20210813200811-30853 kubelet[2807]: I0813 20:15:41.602739    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:15:41 addons-20210813200811-30853 kubelet[2807]: I0813 20:15:41.603424    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:11 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:11.602354    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-fzcpx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:35 addons-20210813200811-30853 kubelet[2807]: E0813 20:16:35.727403    2807 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-rbdkd.169af7004153ede2", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-rbdkd", UID:"4329023d-6983-43c8-be2e-0ebec562c9b1", APIVersion:"v1", ResourceVersion:"641", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210813200811-30853"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03dd2a8eaac4fe2, ext:444930907882, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03dd2a8eaac4fe2, ext:444930907882, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-rbdkd.169af7004153ede2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 20:16:37 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:37.602434    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-xjxw9" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:41 addons-20210813200811-30853 kubelet[2807]: E0813 20:16:41.510008    2807 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-rbdkd.169af7019a4d0151", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-rbdkd", UID:"4329023d-6983-43c8-be2e-0ebec562c9b1", APIVersion:"v1", ResourceVersion:"641", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210813200811-30853"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03dd2aa5e04a751, ext:450718593760, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03dd2aa5e04a751, ext:450718593760, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-rbdkd.169af7019a4d0151" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 20:16:41 addons-20210813200811-30853 kubelet[2807]: E0813 20:16:41.512997    2807 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-rbdkd.169af7019a5f771c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-rbdkd", UID:"4329023d-6983-43c8-be2e-0ebec562c9b1", APIVersion:"v1", ResourceVersion:"641", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210813200811-30853"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03dd2aa5e171d1c, ext:450719803283, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03dd2aa5e171d1c, ext:450719803283, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-rbdkd.169af7019a5f771c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 20:16:47 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:47.395850    2807 scope.go:111] "RemoveContainer" containerID="94c9aac2f110c996c313939b9e1e9f8f00a44bc14926d6e8e32bc48c4403b1bb"
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.564444    2807 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88ksp\" (UniqueName: \"kubernetes.io/projected/4329023d-6983-43c8-be2e-0ebec562c9b1-kube-api-access-88ksp\") pod \"4329023d-6983-43c8-be2e-0ebec562c9b1\" (UID: \"4329023d-6983-43c8-be2e-0ebec562c9b1\") "
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.565394    2807 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4329023d-6983-43c8-be2e-0ebec562c9b1-webhook-cert\") pod \"4329023d-6983-43c8-be2e-0ebec562c9b1\" (UID: \"4329023d-6983-43c8-be2e-0ebec562c9b1\") "
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.581994    2807 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4329023d-6983-43c8-be2e-0ebec562c9b1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4329023d-6983-43c8-be2e-0ebec562c9b1" (UID: "4329023d-6983-43c8-be2e-0ebec562c9b1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.582285    2807 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4329023d-6983-43c8-be2e-0ebec562c9b1-kube-api-access-88ksp" (OuterVolumeSpecName: "kube-api-access-88ksp") pod "4329023d-6983-43c8-be2e-0ebec562c9b1" (UID: "4329023d-6983-43c8-be2e-0ebec562c9b1"). InnerVolumeSpecName "kube-api-access-88ksp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.602088    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.666075    2807 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4329023d-6983-43c8-be2e-0ebec562c9b1-webhook-cert\") on node \"addons-20210813200811-30853\" DevicePath \"\""
	Aug 13 20:16:48 addons-20210813200811-30853 kubelet[2807]: I0813 20:16:48.666245    2807 reconciler.go:319] "Volume detached for volume \"kube-api-access-88ksp\" (UniqueName: \"kubernetes.io/projected/4329023d-6983-43c8-be2e-0ebec562c9b1-kube-api-access-88ksp\") on node \"addons-20210813200811-30853\" DevicePath \"\""
	Aug 13 20:17:01 addons-20210813200811-30853 kubelet[2807]: I0813 20:17:01.602823    2807 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	
	* 
	* ==> storage-provisioner [f68e4225627dab413969b2614955db17c8b906f1c26a4b4bf668a3a233822145] <==
	* I0813 20:09:43.636364       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:09:43.714495       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:09:43.714619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:09:43.735604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:09:43.736853       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210813200811-30853_bb29066b-ba02-4727-889d-f5ff4246bba4!
	I0813 20:09:43.755076       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53184b6b-4e08-472f-b681-b7ac0756e47e", APIVersion:"v1", ResourceVersion:"877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210813200811-30853_bb29066b-ba02-4727-889d-f5ff4246bba4 became leader
	I0813 20:09:43.837821       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210813200811-30853_bb29066b-ba02-4727-889d-f5ff4246bba4!
	

                                                
                                                
-- /stdout --
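Note on the recurring "selfLink was empty, can't make reference" errors in the etcd operator sections above: they come from event-reference construction in the old client-go vendored by etcd-operator 0.9.4. The logged Endpoints objects carry an empty TypeMeta, so reference building falls back to parsing metadata.selfLink, and an API server that no longer populates selfLink leaves nothing to parse, so the LeaderElection event is dropped. Leader election itself still succeeds ("became leader" appears in each message); only the event recording fails. A minimal sketch of that decision path, with illustrative names rather than the library's actual internals:

// Sketch of the reference-construction logic behind the
// "selfLink was empty, can't make reference" errors above.
// Mirrors the behaviour of the client-go vintage vendored by
// etcd-operator 0.9.4; names here are illustrative.
package main

import (
	"errors"
	"fmt"
)

type objectMeta struct {
	Kind       string // TypeMeta.Kind ("" in the logged Endpoints)
	APIVersion string // TypeMeta.APIVersion ("" as well)
	SelfLink   string // deprecated; left empty by newer API servers
}

var errNoSelfLink = errors.New("selfLink was empty, can't make reference")

// makeReference falls back to parsing SelfLink when the object
// carries no type information, which is exactly the path that fails.
func makeReference(o objectMeta) (string, error) {
	if o.Kind != "" && o.APIVersion != "" {
		return o.APIVersion + "/" + o.Kind, nil
	}
	if o.SelfLink == "" {
		return "", errNoSelfLink
	}
	return "parsed from " + o.SelfLink, nil
}

func main() {
	// Objects returned by list/watch have empty TypeMeta, and newer
	// API servers no longer fill selfLink, so the fallback fails and
	// the LeaderElection event is dropped.
	_, err := makeReference(objectMeta{})
	fmt.Println(err) // selfLink was empty, can't make reference
}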
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210813200811-30853 -n addons-20210813200811-30853
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210813200811-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210813200811-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210813200811-30853 describe pod : exit status 1 (47.408743ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210813200811-30853 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (242.87s)
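The post-mortem's "error: resource name may not be empty" above is kubectl behaviour, not a cluster problem: the field-selector query at helpers_test.go:269 returned no non-running pods, so the describe step invoked "kubectl describe pod" with an empty name list. A guard for that case, sketched in Go in the spirit of the helpers (the function name and wiring are hypothetical, not taken from helpers_test.go):

// Sketch: skip the describe step when the field-selector query
// returns no pod names, avoiding kubectl's
// "error: resource name may not be empty".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func describeNonRunning(context string) error {
	// Same query the post-mortem runs: names of pods whose phase != Running.
	out, err := exec.Command("kubectl", "--context", context,
		"get", "po", "-o=jsonpath={.items[*].metadata.name}",
		"-A", "--field-selector=status.phase!=Running").Output()
	if err != nil {
		return err
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		fmt.Println("no non-running pods; skipping describe")
		return nil // nothing to describe, so do not call kubectl with no names
	}
	args := append([]string{"--context", context, "describe", "pod"}, names...)
	desc, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(desc))
	return err
}

func main() {
	_ = describeNonRunning("addons-20210813200811-30853")
}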

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (190.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- rollout status deployment/busybox: (5.505622826s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- nslookup kubernetes.io
E0813 20:26:48.676159   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:53.558173   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:27:21.246399   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:27:29.637309   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- nslookup kubernetes.io: exit status 1 (1m0.294687893s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:495: Pod busybox-84b6686758-g7sjs could not resolve 'kubernetes.io': exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-nfr5z -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- nslookup kubernetes.default: exit status 1 (1m0.311828828s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:505: Pod busybox-84b6686758-g7sjs could not resolve 'kubernetes.default': exit status 1
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-nfr5z -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- nslookup kubernetes.default.svc.cluster.local
E0813 20:28:51.557741   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1m0.312669157s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-84b6686758-g7sjs could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-nfr5z -- nslookup kubernetes.default.svc.cluster.local
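All three failures above have the same shape: the resolver at 10.96.0.10 answers (the Server/Address lines print), yet every lookup from busybox-84b6686758-g7sjs comes back unresolved after roughly a minute, while the parallel lookups from busybox-84b6686758-nfr5z are not reported as failing. That pattern suggests DNS traffic from one node is not reaching CoreDNS, a CNI or kube-proxy path problem on the multinode cluster, rather than a CoreDNS fault. One way to reproduce the check outside busybox's nslookup, sketched in Go with a short timeout so a black-holed UDP path fails fast (the service IP and lookup name are taken from the output above; everything else is illustrative):

// Sketch: query the cluster DNS service directly, the way the
// failing nslookup does, with an explicit timeout instead of a
// minute-long hang.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// 10.96.0.10 is the cluster DNS ClusterIP from the output above.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err) // expected from the broken node
		return
	}
	fmt.Println("resolved:", addrs)
}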
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210813202419-30853 -n multinode-20210813202419-30853
helpers_test.go:245: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202419-30853 logs -n 25: (1.306361106s)
helpers_test.go:253: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                              Args                              |                Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:51 UTC | Fri, 13 Aug 2021 20:21:51 UTC |
	|         | update-context                                                 |                                        |          |         |                               |                               |
	|         | --alsologtostderr -v=2                                         |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:51 UTC | Fri, 13 Aug 2021 20:21:51 UTC |
	|         | update-context                                                 |                                        |          |         |                               |                               |
	|         | --alsologtostderr -v=2                                         |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:51 UTC | Fri, 13 Aug 2021 20:21:51 UTC |
	|         | version --short                                                |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:51 UTC | Fri, 13 Aug 2021 20:21:52 UTC |
	|         | version -o=json --components                                   |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853 image load                     | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:52 UTC | Fri, 13 Aug 2021 20:21:52 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/busybox.tar |                                        |          |         |                               |                               |
	| ssh     | -p                                                             | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:53 UTC | Fri, 13 Aug 2021 20:21:53 UTC |
	|         | functional-20210813201821-30853                                |                                        |          |         |                               |                               |
	|         | -- sudo crictl images                                          |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:54 UTC | Fri, 13 Aug 2021 20:21:54 UTC |
	|         | ssh stat                                                       |                                        |          |         |                               |                               |
	|         | /mount-9p/created-by-test                                      |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:54 UTC | Fri, 13 Aug 2021 20:21:54 UTC |
	|         | ssh stat                                                       |                                        |          |         |                               |                               |
	|         | /mount-9p/created-by-pod                                       |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:54 UTC | Fri, 13 Aug 2021 20:21:54 UTC |
	|         | ssh sudo umount -f /mount-9p                                   |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:55 UTC | Fri, 13 Aug 2021 20:21:55 UTC |
	|         | ssh findmnt -T /mount-9p | grep                                |                                        |          |         |                               |                               |
	|         | 9p                                                             |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:55 UTC | Fri, 13 Aug 2021 20:21:56 UTC |
	|         | ssh -- ls -la /mount-9p                                        |                                        |          |         |                               |                               |
	| delete  | -p                                                             | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:22:18 UTC | Fri, 13 Aug 2021 20:22:19 UTC |
	|         | functional-20210813201821-30853                                |                                        |          |         |                               |                               |
	| start   | -p                                                             | json-output-20210813202219-30853       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:22:19 UTC | Fri, 13 Aug 2021 20:24:06 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                                  |                                        |          |         |                               |                               |
	|         | --memory=2200 --wait=true                                      |                                        |          |         |                               |                               |
	|         | --driver=kvm2                                                  |                                        |          |         |                               |                               |
	|         | --container-runtime=crio                                       |                                        |          |         |                               |                               |
	| unpause | -p                                                             | json-output-20210813202219-30853       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:24:09 UTC | Fri, 13 Aug 2021 20:24:09 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                                  |                                        |          |         |                               |                               |
	| stop    | -p                                                             | json-output-20210813202219-30853       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:24:09 UTC | Fri, 13 Aug 2021 20:24:17 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                                  |                                        |          |         |                               |                               |
	| delete  | -p                                                             | json-output-20210813202219-30853       | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:17 UTC | Fri, 13 Aug 2021 20:24:18 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	| delete  | -p                                                             | json-output-error-20210813202418-30853 | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:18 UTC | Fri, 13 Aug 2021 20:24:18 UTC |
	|         | json-output-error-20210813202418-30853                         |                                        |          |         |                               |                               |
	| start   | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:19 UTC | Fri, 13 Aug 2021 20:26:33 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | --wait=true --memory=2200                                      |                                        |          |         |                               |                               |
	|         | --nodes=2 -v=8                                                 |                                        |          |         |                               |                               |
	|         | --alsologtostderr                                              |                                        |          |         |                               |                               |
	|         | --driver=kvm2                                                  |                                        |          |         |                               |                               |
	|         | --container-runtime=crio                                       |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853 -- apply -f                  | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:34 UTC | Fri, 13 Aug 2021 20:26:34 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml              |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:34 UTC | Fri, 13 Aug 2021 20:26:40 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- rollout status                                              |                                        |          |         |                               |                               |
	|         | deployment/busybox                                             |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:40 UTC | Fri, 13 Aug 2021 20:26:40 UTC |
	|         | -- get pods -o                                                 |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'                            |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:40 UTC | Fri, 13 Aug 2021 20:26:40 UTC |
	|         | -- get pods -o                                                 |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                           |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:27:40 UTC | Fri, 13 Aug 2021 20:27:41 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- exec                                                        |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nfr5z --                                    |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.io                                         |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:28:41 UTC | Fri, 13 Aug 2021 20:28:41 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- exec                                                        |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nfr5z --                                    |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.default                                    |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:29:42 UTC | Fri, 13 Aug 2021 20:29:42 UTC |
	|         | -- exec busybox-84b6686758-nfr5z                               |                                        |          |         |                               |                               |
	|         | -- nslookup                                                    |                                        |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local                           |                                        |          |         |                               |                               |
	|---------|----------------------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:24:19
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:24:19.061564    4908 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:24:19.061854    4908 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:19.061864    4908 out.go:311] Setting ErrFile to fd 2...
	I0813 20:24:19.061870    4908 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:19.062119    4908 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:24:19.062551    4908 out.go:305] Setting JSON to false
	I0813 20:24:19.097379    4908 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":7621,"bootTime":1628878638,"procs":151,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:24:19.097491    4908 start.go:121] virtualization: kvm guest
	I0813 20:24:19.099820    4908 out.go:177] * [multinode-20210813202419-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:24:19.101377    4908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:24:19.099976    4908 notify.go:169] Checking for updates...
	I0813 20:24:19.102828    4908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:24:19.104123    4908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:19.105401    4908 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:24:19.105587    4908 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:24:19.133777    4908 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:24:19.133802    4908 start.go:278] selected driver: kvm2
	I0813 20:24:19.133809    4908 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:24:19.133825    4908 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:24:19.134797    4908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:24:19.134995    4908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:24:19.145532    4908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:24:19.145616    4908 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:24:19.145753    4908 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:24:19.145783    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:24:19.145790    4908 cni.go:154] 0 nodes found, recommending kindnet
	I0813 20:24:19.145802    4908 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:24:19.145816    4908 start_flags.go:277] config:
	{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:24:19.145903    4908 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:24:19.147745    4908 out.go:177] * Starting control plane node multinode-20210813202419-30853 in cluster multinode-20210813202419-30853
	I0813 20:24:19.147764    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:24:19.147788    4908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:24:19.147833    4908 cache.go:56] Caching tarball of preloaded images
	I0813 20:24:19.147916    4908 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:24:19.147933    4908 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:24:19.148220    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:24:19.148247    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json: {Name:mk17167fb279b033724517938130069093c08bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:19.148372    4908 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:24:19.148399    4908 start.go:313] acquiring machines lock for multinode-20210813202419-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:24:19.148440    4908 start.go:317] acquired machines lock for "multinode-20210813202419-30853" in 25.188µs
	I0813 20:24:19.148460    4908 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:24:19.148527    4908 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 20:24:19.150306    4908 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 20:24:19.150406    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:24:19.150445    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:24:19.160407    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0813 20:24:19.160846    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:24:19.161473    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:24:19.161494    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:24:19.161818    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:24:19.161997    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:19.162159    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:19.162292    4908 start.go:160] libmachine.API.Create for "multinode-20210813202419-30853" (driver="kvm2")
	I0813 20:24:19.162325    4908 client.go:168] LocalClient.Create starting
	I0813 20:24:19.162368    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:24:19.162428    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:24:19.162451    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:24:19.162609    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:24:19.162633    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:24:19.162653    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:24:19.162707    4908 main.go:130] libmachine: Running pre-create checks...
	I0813 20:24:19.162723    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .PreCreateCheck
	I0813 20:24:19.163065    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetConfigRaw
	I0813 20:24:19.163453    4908 main.go:130] libmachine: Creating machine...
	I0813 20:24:19.163468    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Create
	I0813 20:24:19.163590    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Creating KVM machine...
	I0813 20:24:19.166176    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found existing default KVM network
	I0813 20:24:19.167250    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.167105    4931 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000a85d0] misses:0}
	I0813 20:24:19.167288    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.167200    4931 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:24:19.188898    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | trying to create private KVM network mk-multinode-20210813202419-30853 192.168.39.0/24...
	I0813 20:24:19.455247    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | private KVM network mk-multinode-20210813202419-30853 192.168.39.0/24 created
	I0813 20:24:19.455284    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853 ...
	I0813 20:24:19.455309    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.455217    4931 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:19.455351    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:24:19.455439    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:24:19.635198    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.635089    4931 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa...
	I0813 20:24:19.793160    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.793049    4931 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/multinode-20210813202419-30853.rawdisk...
	I0813 20:24:19.793197    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Writing magic tar header
	I0813 20:24:19.793212    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Writing SSH key tar header
	I0813 20:24:19.793227    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.793190    4931 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853 ...
	I0813 20:24:19.793370    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853
	I0813 20:24:19.793415    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:24:19.793441    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853 (perms=drwx------)
	I0813 20:24:19.793479    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:24:19.793498    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:19.793514    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:24:19.793527    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:24:19.793541    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:24:19.793555    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:24:19.793567    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home
	I0813 20:24:19.793579    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Skipping /home - not owner
	I0813 20:24:19.793598    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:24:19.793616    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:24:19.793625    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:24:19.793634    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Creating domain...
	I0813 20:24:19.818640    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:87:3c:24 in network default
	I0813 20:24:19.819123    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Ensuring networks are active...
	I0813 20:24:19.819168    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:19.820954    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Ensuring network default is active
	I0813 20:24:19.821267    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Ensuring network mk-multinode-20210813202419-30853 is active
	I0813 20:24:19.821773    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Getting domain xml...
	I0813 20:24:19.823495    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Creating domain...
	I0813 20:24:20.266043    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Waiting to get IP...
	I0813 20:24:20.266736    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.267269    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.267288    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:20.267227    4931 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:24:20.531381    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.531825    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.531847    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:20.531787    4931 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:24:20.914176    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.914608    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.914642    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:20.914550    4931 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:24:21.339152    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.339573    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.339600    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:21.339513    4931 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:24:21.813990    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.814505    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.814535    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:21.814453    4931 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:24:22.403267    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:22.403697    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:22.403720    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:22.403665    4931 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:24:23.238942    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.239305    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.239337    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:23.239266    4931 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:24:23.987123    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.987594    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.987625    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:23.987526    4931 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:24:24.975907    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:24.976378    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:24.976410    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:24.976302    4931 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:24:26.167579    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:26.168038    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:26.168068    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:26.167984    4931 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:24:27.847780    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:27.848253    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:27.848285    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:27.848180    4931 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:24:30.195294    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:30.195832    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:30.195866    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:30.195763    4931 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:24:33.566130    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.566593    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Found IP for machine: 192.168.39.64
	I0813 20:24:33.566617    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Reserving static IP address...
	I0813 20:24:33.566631    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has current primary IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.566933    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find host DHCP lease matching {name: "multinode-20210813202419-30853", mac: "52:54:00:16:ef:64", ip: "192.168.39.64"} in network mk-multinode-20210813202419-30853
	I0813 20:24:33.613374    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Getting to WaitForSSH function...
	I0813 20:24:33.613407    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Reserved static IP address: 192.168.39.64
	I0813 20:24:33.613422    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Waiting for SSH to be available...
	I0813 20:24:33.618672    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.619035    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:33.619071    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.619162    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Using SSH client type: external
	I0813 20:24:33.619198    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa (-rw-------)
	I0813 20:24:33.619239    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:24:33.619256    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | About to run SSH command:
	I0813 20:24:33.619287    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | exit 0
	I0813 20:24:33.750338    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 20:24:33.750831    4908 main.go:130] libmachine: (multinode-20210813202419-30853) KVM machine creation complete!
	I0813 20:24:33.750910    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetConfigRaw
	I0813 20:24:33.751457    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:33.751637    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:33.751796    4908 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:24:33.751815    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:24:33.754204    4908 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:24:33.754217    4908 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:24:33.754223    4908 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:24:33.754230    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:33.758462    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.758775    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:33.758808    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.758928    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:33.759077    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.759207    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.759302    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:33.759425    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:33.759645    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:33.759659    4908 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:24:33.878033    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:24:33.878066    4908 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:24:33.878078    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:33.883049    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.883397    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:33.883436    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.883491    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:33.883659    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.883806    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.883931    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:33.884070    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:33.884248    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:33.884263    4908 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:24:34.003403    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:24:34.003491    4908 main.go:130] libmachine: found compatible host: buildroot
	I0813 20:24:34.003508    4908 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:24:34.003520    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:34.003750    4908 buildroot.go:166] provisioning hostname "multinode-20210813202419-30853"
	I0813 20:24:34.003775    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:34.003937    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.009088    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.009448    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.009484    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.009595    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.009749    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.009913    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.010047    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.010191    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:34.010374    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:34.010394    4908 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813202419-30853 && echo "multinode-20210813202419-30853" | sudo tee /etc/hostname
	I0813 20:24:34.139277    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813202419-30853
	
	I0813 20:24:34.139301    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.144096    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.144392    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.144422    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.144565    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.144746    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.144868    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.144992    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.145110    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:34.145272    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:34.145300    4908 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813202419-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813202419-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813202419-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:24:34.268587    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:24:34.268621    4908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:24:34.268648    4908 buildroot.go:174] setting up certificates
	I0813 20:24:34.268660    4908 provision.go:83] configureAuth start
	I0813 20:24:34.268671    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:34.268934    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:34.273903    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.274197    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.274222    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.274304    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.278459    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.278737    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.278767    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.278809    4908 provision.go:138] copyHostCerts
	I0813 20:24:34.278842    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:24:34.278906    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:24:34.278919    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:24:34.278981    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:24:34.279046    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:24:34.279066    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:24:34.279073    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:24:34.279097    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:24:34.279134    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:24:34.279152    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:24:34.279159    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:24:34.279176    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:24:34.279216    4908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813202419-30853 san=[192.168.39.64 192.168.39.64 localhost 127.0.0.1 minikube multinode-20210813202419-30853]
	I0813 20:24:34.442793    4908 provision.go:172] copyRemoteCerts
	I0813 20:24:34.442870    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:24:34.442898    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.448214    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.448500    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.448537    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.448680    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.448866    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.449000    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.449121    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:34.534014    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 20:24:34.534071    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:24:34.549882    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 20:24:34.549923    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 20:24:34.565408    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 20:24:34.565453    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:24:34.581923    4908 provision.go:86] duration metric: configureAuth took 313.252596ms
	I0813 20:24:34.581944    4908 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:24:34.582138    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:24:34.582242    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.586926    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.587237    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.587277    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.587376    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.587530    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.587654    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.587749    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.587859    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:34.588004    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:34.588026    4908 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:24:35.276577    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:24:35.276616    4908 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:24:35.276630    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetURL
	I0813 20:24:35.279350    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Using libvirt version 3000000
	I0813 20:24:35.283656    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.283931    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.283964    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.284125    4908 main.go:130] libmachine: Docker is up and running!
	I0813 20:24:35.284142    4908 main.go:130] libmachine: Reticulating splines...
	I0813 20:24:35.284150    4908 client.go:171] LocalClient.Create took 16.121814174s
	I0813 20:24:35.284168    4908 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813202419-30853" took 16.121878034s
	I0813 20:24:35.284175    4908 start.go:267] post-start starting for "multinode-20210813202419-30853" (driver="kvm2")
	I0813 20:24:35.284183    4908 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:24:35.284200    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.284445    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:24:35.284473    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:35.288791    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.289071    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.289105    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.289203    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:35.289371    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:35.289538    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:35.289728    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:35.374303    4908 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:24:35.378640    4908 command_runner.go:124] > NAME=Buildroot
	I0813 20:24:35.378660    4908 command_runner.go:124] > VERSION=2020.02.12
	I0813 20:24:35.378667    4908 command_runner.go:124] > ID=buildroot
	I0813 20:24:35.378673    4908 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 20:24:35.378681    4908 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 20:24:35.378899    4908 info.go:137] Remote host: Buildroot 2020.02.12
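
The "Remote host: Buildroot 2020.02.12" line is derived from the /etc/os-release key=value pairs echoed just above. A self-contained sketch of that parse (illustrative only, not minikube's info.go):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        vals := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Each line is KEY=value, with the value optionally quoted.
            parts := strings.SplitN(sc.Text(), "=", 2)
            if len(parts) == 2 {
                vals[parts[0]] = strings.Trim(parts[1], `"`)
            }
        }
        fmt.Printf("Remote host: %s %s\n", vals["NAME"], vals["VERSION_ID"])
    }
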
	I0813 20:24:35.378921    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:24:35.378973    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:24:35.379077    4908 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:24:35.379091    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /etc/ssl/certs/308532.pem
	I0813 20:24:35.379194    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:24:35.385603    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:24:35.402228    4908 start.go:270] post-start completed in 118.036342ms
	I0813 20:24:35.402278    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetConfigRaw
	I0813 20:24:35.402846    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:35.407835    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.408144    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.408173    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.408459    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:24:35.408672    4908 start.go:129] duration metric: createHost completed in 16.260135775s
	I0813 20:24:35.408688    4908 start.go:80] releasing machines lock for "multinode-20210813202419-30853", held for 16.260236178s
	I0813 20:24:35.408724    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.408924    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:35.413018    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.413273    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.413302    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.413443    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.413611    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.414029    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.414255    4908 ssh_runner.go:149] Run: systemctl --version
	I0813 20:24:35.414282    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:35.414301    4908 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:24:35.414345    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:35.418862    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.419229    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.419257    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.419326    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:35.419488    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:35.419647    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:35.419775    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:35.419937    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.420236    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.420262    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.420385    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:35.420521    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:35.420681    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:35.420796    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:35.522808    4908 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 20:24:35.522840    4908 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 20:24:35.522848    4908 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 20:24:35.522867    4908 command_runner.go:124] > The document has moved
	I0813 20:24:35.522877    4908 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 20:24:35.522883    4908 command_runner.go:124] > </BODY></HTML>
	I0813 20:24:35.522927    4908 command_runner.go:124] > systemd 244 (244)
	I0813 20:24:35.522951    4908 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0813 20:24:35.522972    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:24:35.523085    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:24:35.546257    4908 command_runner.go:124] ! time="2021-08-13T20:24:35Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 20:24:37.531845    4908 command_runner.go:124] ! time="2021-08-13T20:24:37Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:24:39.519439    4908 command_runner.go:124] ! time="2021-08-13T20:24:39Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:24:39.524339    4908 command_runner.go:124] > {
	I0813 20:24:39.524356    4908 command_runner.go:124] >   "images": [
	I0813 20:24:39.524360    4908 command_runner.go:124] >   ]
	I0813 20:24:39.524363    4908 command_runner.go:124] > }
	I0813 20:24:39.524380    4908 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.001278627s)
	I0813 20:24:39.524462    4908 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
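
The preload decision above hinges on the JSON that crictl prints to stdout (the endpoint warnings go to stderr, which is why they are logged with "!" while the JSON lines carry ">"). A hedged sketch of that check, using the repoTags field visible in the image listings in this log (not minikube's actual crio.go):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // crictlImages mirrors the JSON shape printed by `crictl images --output json`.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // Output() captures stdout only, so stderr warnings don't pollute the JSON.
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var list crictlImages
        if err := json.Unmarshal(out, &list); err != nil {
            log.Fatal(err)
        }
        const want = "k8s.gcr.io/kube-apiserver:v1.21.3"
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    fmt.Println("preloaded")
                    return
                }
            }
        }
        fmt.Println("not preloaded; fetch the preload tarball")
    }
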
	I0813 20:24:39.524509    4908 ssh_runner.go:149] Run: which lz4
	I0813 20:24:39.528376    4908 command_runner.go:124] > /bin/lz4
	I0813 20:24:39.528629    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 20:24:39.528711    4908 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:24:39.532551    4908 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:24:39.532975    4908 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:24:39.533004    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 20:24:42.603774    4908 crio.go:362] Took 3.075096 seconds to copy over tarball
	I0813 20:24:42.603862    4908 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:24:47.556783    4908 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.952891602s)
	I0813 20:24:47.556814    4908 crio.go:369] Took 4.953008 seconds to extract the tarball
	I0813 20:24:47.556824    4908 ssh_runner.go:100] rm: /preloaded.tar.lz4
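
The "Completed: ... (4.952891602s)" line is the generic duration-metric pattern: time the command and log the elapsed wall-clock time. A trivial local sketch of that pattern (illustrative, not ssh_runner.go itself):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // timedRun runs a command and logs how long it took, mirroring the
    // "Completed: ...: (Ns)" lines in this report.
    func timedRun(name string, arg ...string) error {
        start := time.Now()
        err := exec.Command(name, arg...).Run()
        log.Printf("Completed: %s: (%s)", name, time.Since(start))
        return err
    }

    func main() {
        if err := timedRun("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }
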
	I0813 20:24:47.596055    4908 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:24:47.609155    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:24:47.620074    4908 docker.go:153] disabling docker service ...
	I0813 20:24:47.620124    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:24:47.631270    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:24:47.640076    4908 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 20:24:47.640166    4908 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:24:47.809318    4908 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 20:24:47.809395    4908 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:24:47.944580    4908 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 20:24:47.944607    4908 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 20:24:47.944664    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:24:47.955484    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:24:47.968128    4908 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:24:47.968150    4908 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:24:47.968180    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:24:47.976181    4908 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:24:47.976208    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
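
Assuming the stock crio.conf layout (a fragment of which is dumped at the end of this log), the two sed edits above should leave these TOML keys in /etc/crio/crio.conf:

    pause_image = "k8s.gcr.io/pause:3.4.1"
    cni_default_network = "kindnet"
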
	I0813 20:24:47.983629    4908 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:24:47.989652    4908 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:24:47.989883    4908 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:24:47.989919    4908 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:24:48.005172    4908 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
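
The sysctl probe fails until br_netfilter is loaded; after the modprobe the bridge netfilter knob exists, and IPv4 forwarding is switched on. A quick Go sketch for verifying both knobs after this step (an assumption-level helper, not part of minikube):

    package main

    import (
        "fmt"
        "io/ioutil"
        "strings"
    )

    func main() {
        for _, p := range []string{
            "/proc/sys/net/bridge/bridge-nf-call-iptables",
            "/proc/sys/net/ipv4/ip_forward",
        } {
            b, err := ioutil.ReadFile(p)
            if err != nil {
                // Same failure mode as the sysctl above when br_netfilter is absent.
                fmt.Println(p, "missing:", err)
                continue
            }
            fmt.Println(p, "=", strings.TrimSpace(string(b)))
        }
    }
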
	I0813 20:24:48.011478    4908 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:24:48.136091    4908 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:24:48.244318    4908 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:24:48.244400    4908 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:24:48.249298    4908 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 20:24:48.249316    4908 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 20:24:48.249323    4908 command_runner.go:124] > Device: 14h/20d	Inode: 28443       Links: 1
	I0813 20:24:48.249330    4908 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:24:48.249335    4908 command_runner.go:124] > Access: 2021-08-13 20:24:39.469005242 +0000
	I0813 20:24:48.249341    4908 command_runner.go:124] > Modify: 2021-08-13 20:24:35.170685970 +0000
	I0813 20:24:48.249346    4908 command_runner.go:124] > Change: 2021-08-13 20:24:35.170685970 +0000
	I0813 20:24:48.249351    4908 command_runner.go:124] >  Birth: -
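
The 60-second socket wait above is a poll-until-stat-succeeds loop. One way to write that helper (a sketch under the same 60s budget; minikube's actual implementation may differ):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the socket path until it exists or the
    // timeout elapses, mirroring "Will wait 60s for socket path".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
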
	I0813 20:24:48.249623    4908 start.go:413] Will wait 60s for crictl version
	I0813 20:24:48.249661    4908 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:24:48.285188    4908 command_runner.go:124] > Version:  0.1.0
	I0813 20:24:48.285424    4908 command_runner.go:124] > RuntimeName:  cri-o
	I0813 20:24:48.285458    4908 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 20:24:48.285563    4908 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 20:24:48.287393    4908 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:24:48.287482    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:24:48.502168    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:24:48.502197    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:24:48.502207    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:24:48.502213    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:24:48.502223    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:24:48.502230    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:24:48.502236    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:24:48.502243    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:24:48.503436    4908 command_runner.go:124] ! time="2021-08-13T20:24:48Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:24:48.503524    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:24:48.787084    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:24:48.787106    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:24:48.787115    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:24:48.787120    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:24:48.787127    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:24:48.787132    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:24:48.787136    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:24:48.787141    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:24:48.788464    4908 command_runner.go:124] ! time="2021-08-13T20:24:48Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:24:50.150742    4908 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:24:50.151102    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:50.156784    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:50.157038    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:50.157071    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:50.157267    4908 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:24:50.162427    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
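
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing entry, append the fresh one, and copy the result back into place. An equivalent sketch in Go (illustrative; it must run as root to write /etc/hosts):

    package main

    import (
        "io/ioutil"
        "log"
        "strings"
    )

    func main() {
        const entry = "192.168.39.1\thost.minikube.internal"
        data, err := ioutil.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale host.minikube.internal line, keep everything else.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := ioutil.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }
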
	I0813 20:24:50.174514    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:24:50.174569    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:24:50.247001    4908 command_runner.go:124] > {
	I0813 20:24:50.247031    4908 command_runner.go:124] >   "images": [
	I0813 20:24:50.247038    4908 command_runner.go:124] >     {
	I0813 20:24:50.247050    4908 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 20:24:50.247057    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247067    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 20:24:50.247073    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247079    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247102    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 20:24:50.247115    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 20:24:50.247123    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247138    4908 command_runner.go:124] >       "size": "119984626",
	I0813 20:24:50.247148    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247154    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247162    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247167    4908 command_runner.go:124] >     },
	I0813 20:24:50.247173    4908 command_runner.go:124] >     {
	I0813 20:24:50.247184    4908 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 20:24:50.247191    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247200    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 20:24:50.247205    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247213    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247225    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 20:24:50.247240    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 20:24:50.247249    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247256    4908 command_runner.go:124] >       "size": "228528983",
	I0813 20:24:50.247263    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247270    4908 command_runner.go:124] >       "username": "nonroot",
	I0813 20:24:50.247292    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247300    4908 command_runner.go:124] >     },
	I0813 20:24:50.247305    4908 command_runner.go:124] >     {
	I0813 20:24:50.247312    4908 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 20:24:50.247317    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247323    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 20:24:50.247327    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247331    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247341    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 20:24:50.247349    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 20:24:50.247358    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247363    4908 command_runner.go:124] >       "size": "36950651",
	I0813 20:24:50.247367    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247373    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247376    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247380    4908 command_runner.go:124] >     },
	I0813 20:24:50.247383    4908 command_runner.go:124] >     {
	I0813 20:24:50.247390    4908 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 20:24:50.247395    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247400    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 20:24:50.247403    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247408    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247420    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 20:24:50.247431    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 20:24:50.247434    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247439    4908 command_runner.go:124] >       "size": "31470524",
	I0813 20:24:50.247445    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247450    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247454    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247457    4908 command_runner.go:124] >     },
	I0813 20:24:50.247460    4908 command_runner.go:124] >     {
	I0813 20:24:50.247467    4908 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 20:24:50.247472    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247477    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 20:24:50.247481    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247485    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247492    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 20:24:50.247501    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 20:24:50.247504    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247510    4908 command_runner.go:124] >       "size": "42585056",
	I0813 20:24:50.247514    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247518    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247523    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247526    4908 command_runner.go:124] >     },
	I0813 20:24:50.247530    4908 command_runner.go:124] >     {
	I0813 20:24:50.247536    4908 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 20:24:50.247541    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247546    4908 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 20:24:50.247549    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247553    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247562    4908 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 20:24:50.247570    4908 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 20:24:50.247574    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247578    4908 command_runner.go:124] >       "size": "254662613",
	I0813 20:24:50.247582    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247586    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247590    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247593    4908 command_runner.go:124] >     },
	I0813 20:24:50.247597    4908 command_runner.go:124] >     {
	I0813 20:24:50.247603    4908 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 20:24:50.247608    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247618    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 20:24:50.247623    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247628    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247637    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 20:24:50.247645    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 20:24:50.247649    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247653    4908 command_runner.go:124] >       "size": "126878961",
	I0813 20:24:50.247657    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.247661    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.247665    4908 command_runner.go:124] >       },
	I0813 20:24:50.247669    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247672    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247676    4908 command_runner.go:124] >     },
	I0813 20:24:50.247679    4908 command_runner.go:124] >     {
	I0813 20:24:50.247688    4908 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 20:24:50.247695    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247703    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 20:24:50.247711    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247718    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247731    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 20:24:50.247746    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 20:24:50.247751    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247763    4908 command_runner.go:124] >       "size": "121087578",
	I0813 20:24:50.247768    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.247775    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.247780    4908 command_runner.go:124] >       },
	I0813 20:24:50.247831    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247840    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247844    4908 command_runner.go:124] >     },
	I0813 20:24:50.247847    4908 command_runner.go:124] >     {
	I0813 20:24:50.247854    4908 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 20:24:50.247858    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247863    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 20:24:50.247866    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247871    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247880    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 20:24:50.247887    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 20:24:50.247892    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247896    4908 command_runner.go:124] >       "size": "105129702",
	I0813 20:24:50.247904    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247912    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247917    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247922    4908 command_runner.go:124] >     },
	I0813 20:24:50.247925    4908 command_runner.go:124] >     {
	I0813 20:24:50.247931    4908 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 20:24:50.247936    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247941    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 20:24:50.247944    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247948    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247956    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 20:24:50.247964    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 20:24:50.247968    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247972    4908 command_runner.go:124] >       "size": "51893338",
	I0813 20:24:50.247976    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.247979    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.247983    4908 command_runner.go:124] >       },
	I0813 20:24:50.247987    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247991    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247994    4908 command_runner.go:124] >     },
	I0813 20:24:50.247997    4908 command_runner.go:124] >     {
	I0813 20:24:50.248004    4908 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 20:24:50.248008    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.248013    4908 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 20:24:50.248016    4908 command_runner.go:124] >       ],
	I0813 20:24:50.248020    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.248027    4908 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 20:24:50.248035    4908 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 20:24:50.248039    4908 command_runner.go:124] >       ],
	I0813 20:24:50.248043    4908 command_runner.go:124] >       "size": "689817",
	I0813 20:24:50.248047    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.248051    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.248055    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.248058    4908 command_runner.go:124] >     }
	I0813 20:24:50.248061    4908 command_runner.go:124] >   ]
	I0813 20:24:50.248064    4908 command_runner.go:124] > }
	I0813 20:24:50.248208    4908 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:24:50.248223    4908 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:24:50.248290    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:24:50.283650    4908 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:24:50.283668    4908 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:24:50.283768    4908 ssh_runner.go:149] Run: crio config
	I0813 20:24:50.543400    4908 command_runner.go:124] ! time="2021-08-13T20:24:50Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:24:50.545073    4908 command_runner.go:124] ! time="2021-08-13T20:24:50Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 20:24:50.545101    4908 command_runner.go:124] ! time="2021-08-13T20:24:50Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 20:24:50.547870    4908 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 20:24:50.550327    4908 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 20:24:50.550345    4908 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 20:24:50.550352    4908 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 20:24:50.550362    4908 command_runner.go:124] > #
	I0813 20:24:50.550386    4908 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 20:24:50.550400    4908 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 20:24:50.550410    4908 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 20:24:50.550420    4908 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 20:24:50.550424    4908 command_runner.go:124] > # reload'.
	I0813 20:24:50.550431    4908 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 20:24:50.550440    4908 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 20:24:50.550447    4908 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 20:24:50.550454    4908 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 20:24:50.550457    4908 command_runner.go:124] > [crio]
	I0813 20:24:50.550464    4908 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 20:24:50.550475    4908 command_runner.go:124] > # container images, in this directory.
	I0813 20:24:50.550484    4908 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 20:24:50.550498    4908 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 20:24:50.550509    4908 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 20:24:50.550534    4908 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 20:24:50.550543    4908 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 20:24:50.550549    4908 command_runner.go:124] > #storage_driver = "overlay"
	I0813 20:24:50.550558    4908 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0813 20:24:50.550569    4908 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 20:24:50.550577    4908 command_runner.go:124] > #storage_option = [
	I0813 20:24:50.550582    4908 command_runner.go:124] > #]
	I0813 20:24:50.550594    4908 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 20:24:50.550607    4908 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 20:24:50.550618    4908 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 20:24:50.550625    4908 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 20:24:50.550634    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 20:24:50.550643    4908 command_runner.go:124] > # always happen on a node reboot
	I0813 20:24:50.550649    4908 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 20:24:50.550655    4908 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 20:24:50.550668    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 20:24:50.550692    4908 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 20:24:50.550724    4908 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 20:24:50.550737    4908 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 20:24:50.550741    4908 command_runner.go:124] > [crio.api]
	I0813 20:24:50.550747    4908 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 20:24:50.550752    4908 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 20:24:50.550758    4908 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 20:24:50.550768    4908 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 20:24:50.550780    4908 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 20:24:50.550788    4908 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 20:24:50.550798    4908 command_runner.go:124] > stream_port = "0"
	I0813 20:24:50.550807    4908 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 20:24:50.550816    4908 command_runner.go:124] > stream_enable_tls = false
	I0813 20:24:50.550826    4908 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 20:24:50.550842    4908 command_runner.go:124] > stream_idle_timeout = ""
	I0813 20:24:50.550869    4908 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 20:24:50.550886    4908 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 20:24:50.550892    4908 command_runner.go:124] > # minutes.
	I0813 20:24:50.550898    4908 command_runner.go:124] > stream_tls_cert = ""
	I0813 20:24:50.550908    4908 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 20:24:50.550927    4908 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 20:24:50.550938    4908 command_runner.go:124] > stream_tls_key = ""
	I0813 20:24:50.550950    4908 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 20:24:50.550963    4908 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 20:24:50.550972    4908 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 20:24:50.550977    4908 command_runner.go:124] > stream_tls_ca = ""
	I0813 20:24:50.550993    4908 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:24:50.551003    4908 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 20:24:50.551015    4908 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:24:50.551025    4908 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 20:24:50.551039    4908 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 20:24:50.551051    4908 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 20:24:50.551060    4908 command_runner.go:124] > [crio.runtime]
	I0813 20:24:50.551072    4908 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 20:24:50.551081    4908 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 20:24:50.551090    4908 command_runner.go:124] > # "nofile=1024:2048"
	I0813 20:24:50.551105    4908 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 20:24:50.551115    4908 command_runner.go:124] > #default_ulimits = [
	I0813 20:24:50.551124    4908 command_runner.go:124] > #]
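
	For illustration, a filled-in default_ulimits entry following the "<ulimit name>=<soft limit>:<hard limit>" format described in the comments above might look like this (hypothetical values, not taken from this run):

		default_ulimits = [
			"nofile=1024:2048",
		]
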
	I0813 20:24:50.551146    4908 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 20:24:50.551157    4908 command_runner.go:124] > no_pivot = false
	I0813 20:24:50.551164    4908 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 20:24:50.551182    4908 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 20:24:50.551193    4908 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 20:24:50.551206    4908 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 20:24:50.551219    4908 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 20:24:50.551230    4908 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 20:24:50.551240    4908 command_runner.go:124] > # Cgroup setting for conmon
	I0813 20:24:50.551250    4908 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 20:24:50.551259    4908 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 20:24:50.551264    4908 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 20:24:50.551273    4908 command_runner.go:124] > conmon_env = [
	I0813 20:24:50.551283    4908 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 20:24:50.551289    4908 command_runner.go:124] > ]
	I0813 20:24:50.551298    4908 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 20:24:50.551308    4908 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 20:24:50.551320    4908 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 20:24:50.551329    4908 command_runner.go:124] > default_env = [
	I0813 20:24:50.551333    4908 command_runner.go:124] > ]
	I0813 20:24:50.551344    4908 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 20:24:50.551353    4908 command_runner.go:124] > selinux = false
	I0813 20:24:50.551363    4908 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 20:24:50.551373    4908 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 20:24:50.551384    4908 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 20:24:50.551393    4908 command_runner.go:124] > seccomp_profile = ""
	I0813 20:24:50.551402    4908 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 20:24:50.551416    4908 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 20:24:50.551431    4908 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 20:24:50.551441    4908 command_runner.go:124] > # which might increase security.
	I0813 20:24:50.551449    4908 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 20:24:50.551461    4908 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 20:24:50.551471    4908 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 20:24:50.551482    4908 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 20:24:50.551496    4908 command_runner.go:124] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0813 20:24:50.551506    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.551520    4908 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 20:24:50.551534    4908 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 20:24:50.551544    4908 command_runner.go:124] > # irqbalance daemon.
	I0813 20:24:50.551562    4908 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 20:24:50.551573    4908 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 20:24:50.551582    4908 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 20:24:50.551594    4908 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 20:24:50.551601    4908 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 20:24:50.551619    4908 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 20:24:50.551635    4908 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 20:24:50.551644    4908 command_runner.go:124] > # will be added.
	I0813 20:24:50.551648    4908 command_runner.go:124] > default_capabilities = [
	I0813 20:24:50.551653    4908 command_runner.go:124] > 	"CHOWN",
	I0813 20:24:50.551658    4908 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 20:24:50.551666    4908 command_runner.go:124] > 	"FSETID",
	I0813 20:24:50.551672    4908 command_runner.go:124] > 	"FOWNER",
	I0813 20:24:50.551679    4908 command_runner.go:124] > 	"SETGID",
	I0813 20:24:50.551685    4908 command_runner.go:124] > 	"SETUID",
	I0813 20:24:50.551692    4908 command_runner.go:124] > 	"SETPCAP",
	I0813 20:24:50.551698    4908 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 20:24:50.551707    4908 command_runner.go:124] > 	"KILL",
	I0813 20:24:50.551712    4908 command_runner.go:124] > ]
	I0813 20:24:50.551723    4908 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 20:24:50.551735    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:24:50.551744    4908 command_runner.go:124] > default_sysctls = [
	I0813 20:24:50.551749    4908 command_runner.go:124] > ]
	I0813 20:24:50.551758    4908 command_runner.go:124] > # List of additional devices, specified as
	I0813 20:24:50.551773    4908 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 20:24:50.551784    4908 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 20:24:50.551797    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:24:50.551806    4908 command_runner.go:124] > additional_devices = [
	I0813 20:24:50.551811    4908 command_runner.go:124] > ]
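
	As a sketch of the "<device-on-host>:<device-on-container>:<permissions>" format documented above, a hypothetical entry passing a block device through with read, write, and mknod permissions would be:

		additional_devices = [
			"/dev/sdc:/dev/xvdc:rwm",
		]
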
	I0813 20:24:50.551820    4908 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 20:24:50.551827    4908 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 20:24:50.551834    4908 command_runner.go:124] > hooks_dir = [
	I0813 20:24:50.551842    4908 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 20:24:50.551850    4908 command_runner.go:124] > ]
	I0813 20:24:50.551860    4908 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 20:24:50.551874    4908 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 20:24:50.551885    4908 command_runner.go:124] > # its default mounts from the following two files:
	I0813 20:24:50.551893    4908 command_runner.go:124] > #
	I0813 20:24:50.551903    4908 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 20:24:50.551922    4908 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 20:24:50.551932    4908 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 20:24:50.551940    4908 command_runner.go:124] > #
	I0813 20:24:50.551950    4908 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 20:24:50.551963    4908 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 20:24:50.551977    4908 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 20:24:50.551988    4908 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 20:24:50.551995    4908 command_runner.go:124] > #
	I0813 20:24:50.552001    4908 command_runner.go:124] > #default_mounts_file = ""
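
	A mounts file pointed to by default_mounts_file holds one /SRC:/DST pair per line, as described above; a minimal hypothetical example:

		/usr/share/zoneinfo:/usr/share/zoneinfo
		/etc/pki/ca-trust:/etc/pki/ca-trust
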
	I0813 20:24:50.552010    4908 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 20:24:50.552019    4908 command_runner.go:124] > pids_limit = 1024
	I0813 20:24:50.552034    4908 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 20:24:50.552047    4908 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 20:24:50.552060    4908 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 20:24:50.552072    4908 command_runner.go:124] > # limit is never exceeded.
	I0813 20:24:50.552082    4908 command_runner.go:124] > log_size_max = -1
	I0813 20:24:50.552144    4908 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 20:24:50.552155    4908 command_runner.go:124] > log_to_journald = false
	I0813 20:24:50.552165    4908 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 20:24:50.552174    4908 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 20:24:50.552186    4908 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 20:24:50.552196    4908 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 20:24:50.552207    4908 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 20:24:50.552214    4908 command_runner.go:124] > bind_mount_prefix = ""
	I0813 20:24:50.552225    4908 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 20:24:50.552232    4908 command_runner.go:124] > read_only = false
	I0813 20:24:50.552240    4908 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 20:24:50.552254    4908 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 20:24:50.552263    4908 command_runner.go:124] > # live configuration reload.
	I0813 20:24:50.552269    4908 command_runner.go:124] > log_level = "info"
	I0813 20:24:50.552280    4908 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 20:24:50.552292    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.552304    4908 command_runner.go:124] > log_filter = ""
	I0813 20:24:50.552317    4908 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 20:24:50.552330    4908 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 20:24:50.552337    4908 command_runner.go:124] > # separated by comma.
	I0813 20:24:50.552340    4908 command_runner.go:124] > uid_mappings = ""
	I0813 20:24:50.552350    4908 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 20:24:50.552364    4908 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 20:24:50.552380    4908 command_runner.go:124] > # separated by comma.
	I0813 20:24:50.552390    4908 command_runner.go:124] > gid_mappings = ""
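
	Both mappings take containerID:HostID:Size triples, comma-separated for multiple ranges; for instance, mapping container root to an unprivileged host range (hypothetical values) would read:

		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"
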
	I0813 20:24:50.552400    4908 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 20:24:50.552412    4908 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 20:24:50.552424    4908 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 20:24:50.552430    4908 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 20:24:50.552437    4908 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 20:24:50.552447    4908 command_runner.go:124] > # and manage their lifecycle.
	I0813 20:24:50.552459    4908 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 20:24:50.552469    4908 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 20:24:50.552480    4908 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 20:24:50.552494    4908 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 20:24:50.552505    4908 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 20:24:50.552519    4908 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 20:24:50.552526    4908 command_runner.go:124] > drop_infra_ctr = false
	I0813 20:24:50.552535    4908 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 20:24:50.552547    4908 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 20:24:50.552562    4908 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 20:24:50.552572    4908 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 20:24:50.552582    4908 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 20:24:50.552590    4908 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 20:24:50.552597    4908 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 20:24:50.552607    4908 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 20:24:50.552611    4908 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 20:24:50.552619    4908 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 20:24:50.552629    4908 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 20:24:50.552639    4908 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 20:24:50.552646    4908 command_runner.go:124] > default_runtime = "runc"
	I0813 20:24:50.552658    4908 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 20:24:50.552669    4908 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 20:24:50.552679    4908 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 20:24:50.552688    4908 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 20:24:50.552692    4908 command_runner.go:124] > #
	I0813 20:24:50.552697    4908 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 20:24:50.552702    4908 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 20:24:50.552708    4908 command_runner.go:124] > #  runtime_type = "oci"
	I0813 20:24:50.552715    4908 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 20:24:50.552723    4908 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 20:24:50.552729    4908 command_runner.go:124] > #  allowed_annotations = []
	I0813 20:24:50.552740    4908 command_runner.go:124] > # Where:
	I0813 20:24:50.552748    4908 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 20:24:50.552758    4908 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 20:24:50.552772    4908 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 20:24:50.552783    4908 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 20:24:50.552789    4908 command_runner.go:124] > #   in $PATH.
	I0813 20:24:50.552798    4908 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 20:24:50.552810    4908 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 20:24:50.552821    4908 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 20:24:50.552832    4908 command_runner.go:124] > #   state.
	I0813 20:24:50.552845    4908 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 20:24:50.552857    4908 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 20:24:50.552871    4908 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 20:24:50.552883    4908 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 20:24:50.552893    4908 command_runner.go:124] > #   The currently recognized values are:
	I0813 20:24:50.552905    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 20:24:50.552918    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 20:24:50.552930    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 20:24:50.552940    4908 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 20:24:50.552948    4908 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 20:24:50.552957    4908 command_runner.go:124] > runtime_type = "oci"
	I0813 20:24:50.552963    4908 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 20:24:50.552973    4908 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 20:24:50.552979    4908 command_runner.go:124] > # running containers
	I0813 20:24:50.552987    4908 command_runner.go:124] > #[crio.runtime.runtimes.crun]
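
	Filled in along the lines of the runc entry above, a hypothetical crun handler (binary path assumed, not taken from this run) would look like:

		[crio.runtime.runtimes.crun]
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
		runtime_root = "/run/crun"
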
	I0813 20:24:50.552999    4908 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 20:24:50.553012    4908 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 20:24:50.553024    4908 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 20:24:50.553035    4908 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 20:24:50.553045    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 20:24:50.553051    4908 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 20:24:50.553057    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 20:24:50.553064    4908 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 20:24:50.553074    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 20:24:50.553086    4908 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 20:24:50.553095    4908 command_runner.go:124] > #
	I0813 20:24:50.553111    4908 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 20:24:50.553124    4908 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 20:24:50.553136    4908 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 20:24:50.553151    4908 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 20:24:50.553164    4908 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 20:24:50.553173    4908 command_runner.go:124] > [crio.image]
	I0813 20:24:50.553183    4908 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 20:24:50.553193    4908 command_runner.go:124] > default_transport = "docker://"
	I0813 20:24:50.553203    4908 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 20:24:50.553216    4908 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:24:50.553227    4908 command_runner.go:124] > global_auth_file = ""
	I0813 20:24:50.553235    4908 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 20:24:50.553244    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.553254    4908 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 20:24:50.553265    4908 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 20:24:50.553278    4908 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:24:50.553290    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.553297    4908 command_runner.go:124] > pause_image_auth_file = ""
	I0813 20:24:50.553309    4908 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 20:24:50.553321    4908 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 20:24:50.553330    4908 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 20:24:50.553341    4908 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 20:24:50.553352    4908 command_runner.go:124] > pause_command = "/pause"
	I0813 20:24:50.553362    4908 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 20:24:50.553376    4908 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 20:24:50.553389    4908 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 20:24:50.553410    4908 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 20:24:50.553421    4908 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 20:24:50.553426    4908 command_runner.go:124] > signature_policy = ""
	I0813 20:24:50.553434    4908 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 20:24:50.553447    4908 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 20:24:50.553457    4908 command_runner.go:124] > # changing them here.
	I0813 20:24:50.553465    4908 command_runner.go:124] > #insecure_registries = "[]"
	I0813 20:24:50.553477    4908 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 20:24:50.553488    4908 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 20:24:50.553498    4908 command_runner.go:124] > image_volumes = "mkdir"
	I0813 20:24:50.553506    4908 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 20:24:50.553519    4908 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 20:24:50.553533    4908 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 20:24:50.553551    4908 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 20:24:50.553562    4908 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 20:24:50.553569    4908 command_runner.go:124] > #registries = [
	I0813 20:24:50.553704    4908 command_runner.go:124] > # 	"docker.io",
	I0813 20:24:50.554196    4908 command_runner.go:124] > #]
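
	Uncommented, a registries list that adds extra unqualified-search registries (a hypothetical configuration, not the one used here) would read:

		registries = [
			"docker.io",
			"quay.io",
		]
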
	I0813 20:24:50.554221    4908 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 20:24:50.554230    4908 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 20:24:50.554250    4908 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 20:24:50.554256    4908 command_runner.go:124] > # CNI plugins.
	I0813 20:24:50.554270    4908 command_runner.go:124] > [crio.network]
	I0813 20:24:50.554284    4908 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 20:24:50.554301    4908 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0813 20:24:50.554311    4908 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 20:24:50.554328    4908 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 20:24:50.554341    4908 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 20:24:50.554354    4908 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 20:24:50.554385    4908 command_runner.go:124] > plugin_dirs = [
	I0813 20:24:50.554395    4908 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 20:24:50.554400    4908 command_runner.go:124] > ]
	I0813 20:24:50.554420    4908 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 20:24:50.554426    4908 command_runner.go:124] > [crio.metrics]
	I0813 20:24:50.554434    4908 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 20:24:50.554446    4908 command_runner.go:124] > enable_metrics = true
	I0813 20:24:50.554459    4908 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 20:24:50.554468    4908 command_runner.go:124] > metrics_port = 9090
	I0813 20:24:50.554516    4908 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 20:24:50.554529    4908 command_runner.go:124] > metrics_socket = ""
	I0813 20:24:50.554605    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:24:50.554620    4908 cni.go:154] 1 nodes found, recommending kindnet
	I0813 20:24:50.554631    4908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:24:50.554646    4908 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813202419-30853 NodeName:multinode-20210813202419-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.64 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:24:50.554827    4908 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813202419-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:24:50.555250    4908 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813202419-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.64 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:24:50.555317    4908 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:24:50.563047    4908 command_runner.go:124] > kubeadm
	I0813 20:24:50.563059    4908 command_runner.go:124] > kubectl
	I0813 20:24:50.563063    4908 command_runner.go:124] > kubelet
	I0813 20:24:50.563439    4908 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:24:50.563508    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:24:50.570579    4908 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0813 20:24:50.582471    4908 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:24:50.594516    4908 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0813 20:24:50.606196    4908 ssh_runner.go:149] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0813 20:24:50.610130    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:24:50.620699    4908 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853 for IP: 192.168.39.64
	I0813 20:24:50.620757    4908 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:24:50.620779    4908 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:24:50.620826    4908 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key
	I0813 20:24:50.620843    4908 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt with IP's: []
	I0813 20:24:51.344548    4908 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt ...
	I0813 20:24:51.344584    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt: {Name:mka1ee370ee925eb7e1501675df2e7ea7e3c224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.344790    4908 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key ...
	I0813 20:24:51.344808    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key: {Name:mk15171e52061b2035f15d5434f676c84c199eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.344899    4908 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390
	I0813 20:24:51.344911    4908 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390 with IP's: [192.168.39.64 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:24:51.395270    4908 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390 ...
	I0813 20:24:51.395299    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390: {Name:mk754558e7785498ea66501b23a82045536b3325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.395463    4908 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390 ...
	I0813 20:24:51.395475    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390: {Name:mke02b58094314a18ba8e3f83b2c71c941e182ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.395554    4908 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt
	I0813 20:24:51.395662    4908 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key
	I0813 20:24:51.395724    4908 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key
	I0813 20:24:51.395732    4908 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt with IP's: []
	I0813 20:24:51.652494    4908 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt ...
	I0813 20:24:51.652528    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt: {Name:mk684c6e1d6b08d32f001fa3d1e79a30161eb9d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.652711    4908 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key ...
	I0813 20:24:51.652726    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key: {Name:mkcbcdb7473a58f8d14621a8dede511c86f24c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.652806    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 20:24:51.652822    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 20:24:51.652831    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 20:24:51.652840    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 20:24:51.652852    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:24:51.652862    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:24:51.652874    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:24:51.652884    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:24:51.652932    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:24:51.652967    4908 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:24:51.652982    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:24:51.653007    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:24:51.653030    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:24:51.653054    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:24:51.653096    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:24:51.653123    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.653136    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem -> /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.653145    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.653944    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:24:51.674456    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:24:51.692244    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:24:51.709106    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:24:51.725960    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:24:51.742359    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:24:51.758198    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:24:51.774580    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:24:51.790864    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:24:51.806953    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:24:51.824402    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:24:51.841914    4908 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:24:51.853430    4908 ssh_runner.go:149] Run: openssl version
	I0813 20:24:51.858510    4908 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 20:24:51.859107    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:24:51.866399    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.870523    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.870935    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.870979    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.876366    4908 command_runner.go:124] > b5213941
	I0813 20:24:51.876428    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:24:51.885234    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:24:51.892403    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.896812    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.896833    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.896864    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.901946    4908 command_runner.go:124] > 51391683
	I0813 20:24:51.902276    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:24:51.909497    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:24:51.916917    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.921077    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.921328    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.921381    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.926815    4908 command_runner.go:124] > 3ec20f2e
	I0813 20:24:51.926937    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
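
The openssl/ln sequence above implements the c_rehash convention: OpenSSL-based clients locate CAs in /etc/ssl/certs by a subject-hash filename of the form <hash>.0, so each copied PEM gets a hash-named symlink. A sketch of the same two steps, shelling out to openssl exactly as the log does (paths and error handling illustrative only):

    // Sketch: compute a certificate's OpenSSL subject hash and create the
    // /etc/ssl/certs/<hash>.0 symlink that OpenSSL looks up.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCertByHash(certPath string) error {
        // Same command the log runs: openssl x509 -hash -noout -in <cert>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mirror ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
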
	I0813 20:24:51.934097    4908 kubeadm.go:390] StartCluster: {Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:24:51.934172    4908 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:24:51.934204    4908 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:24:51.967438    4908 cri.go:76] found id: ""
	I0813 20:24:51.967505    4908 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:24:51.974346    4908 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0813 20:24:51.974372    4908 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0813 20:24:51.974383    4908 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0813 20:24:51.974493    4908 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:24:51.980818    4908 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:24:51.987175    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0813 20:24:51.987198    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0813 20:24:51.987206    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0813 20:24:51.987214    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:24:51.987242    4908 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
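
Every "No such file or directory" here is expected: the config check only probes for files a previous kubeadm run would have left behind, and since none exist the node is fresh and stale-config cleanup can be skipped. A sketch of that probe, under the assumption it reduces to an existence check (run locally here for brevity; the real check does the ls over SSH):

    // Sketch: treat the node as "dirty" if any kubeadm-era config file
    // survives from an earlier cluster.
    package kubeadmcheck

    import "os"

    func hasStaleKubeadmConfig() bool {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if _, err := os.Stat(f); err == nil {
                return true // leftover config found: clean up before init
            }
        }
        return false
    }
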
	I0813 20:24:51.987277    4908 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:24:52.150812    4908 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0813 20:24:52.150903    4908 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 20:24:52.477726    4908 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 20:24:52.477866    4908 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 20:24:52.478009    4908 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 20:24:52.705047    4908 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 20:24:52.802691    4908 out.go:204]   - Generating certificates and keys ...
	I0813 20:24:52.802809    4908 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0813 20:24:52.802913    4908 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0813 20:24:52.894346    4908 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 20:24:53.087905    4908 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0813 20:24:53.191891    4908 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0813 20:24:53.331526    4908 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0813 20:24:53.677101    4908 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0813 20:24:53.678093    4908 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210813202419-30853] and IPs [192.168.39.64 127.0.0.1 ::1]
	I0813 20:24:53.787935    4908 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0813 20:24:53.788292    4908 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210813202419-30853] and IPs [192.168.39.64 127.0.0.1 ::1]
	I0813 20:24:54.106364    4908 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 20:24:54.223555    4908 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 20:24:54.306956    4908 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0813 20:24:54.307322    4908 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 20:24:54.491373    4908 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 20:24:54.683168    4908 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 20:24:54.799101    4908 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 20:24:54.906630    4908 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 20:24:54.932231    4908 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 20:24:54.933513    4908 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 20:24:54.933570    4908 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 20:24:55.105952    4908 out.go:204]   - Booting up control plane ...
	I0813 20:24:55.104142    4908 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 20:24:55.106083    4908 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 20:24:55.118658    4908 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 20:24:55.119663    4908 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 20:24:55.120478    4908 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 20:24:55.128338    4908 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 20:25:11.128024    4908 command_runner.go:124] > [apiclient] All control plane components are healthy after 16.005267 seconds
	I0813 20:25:11.128159    4908 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 20:25:11.167559    4908 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 20:25:11.706952    4908 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0813 20:25:11.707291    4908 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210813202419-30853 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 20:25:12.221982    4908 out.go:204]   - Configuring RBAC rules ...
	I0813 20:25:12.220526    4908 command_runner.go:124] > [bootstrap-token] Using token: 6rribx.g18moxouefc7yp35
	I0813 20:25:12.222112    4908 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 20:25:12.230255    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 20:25:12.248069    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 20:25:12.255826    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 20:25:12.259793    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 20:25:12.265826    4908 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 20:25:12.283294    4908 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 20:25:12.719703    4908 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0813 20:25:12.786682    4908 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0813 20:25:12.791765    4908 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0813 20:25:12.791863    4908 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0813 20:25:12.791903    4908 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0813 20:25:12.792048    4908 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 20:25:12.792141    4908 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 20:25:12.792238    4908 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0813 20:25:12.792285    4908 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 20:25:12.792345    4908 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0813 20:25:12.792452    4908 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 20:25:12.792543    4908 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 20:25:12.792683    4908 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0813 20:25:12.792749    4908 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0813 20:25:12.792821    4908 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 6rribx.g18moxouefc7yp35 \
	I0813 20:25:12.792922    4908 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b \
	I0813 20:25:12.792957    4908 command_runner.go:124] > 	--control-plane 
	I0813 20:25:12.793052    4908 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0813 20:25:12.793126    4908 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 6rribx.g18moxouefc7yp35 \
	I0813 20:25:12.793257    4908 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b 
	I0813 20:25:12.797355    4908 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
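
The --discovery-token-ca-cert-hash in the join commands above is not arbitrary: kubeadm prints the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which a joining node recomputes to pin the CA before trusting the API server. A short program that would reproduce the sha256:00d93bc1... value from the same CA on disk:

    // Sketch: recompute kubeadm's discovery-token CA cert hash
    // (SHA-256 over the CA certificate's raw SubjectPublicKeyInfo).
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
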
	I0813 20:25:12.798324    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:25:12.798352    4908 cni.go:154] 1 nodes found, recommending kindnet
	I0813 20:25:12.800148    4908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:25:12.800219    4908 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:25:12.809588    4908 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 20:25:12.809625    4908 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 20:25:12.809633    4908 command_runner.go:124] > Device: 10h/16d	Inode: 22875       Links: 1
	I0813 20:25:12.809644    4908 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:25:12.809657    4908 command_runner.go:124] > Access: 2021-08-13 20:24:33.325091489 +0000
	I0813 20:25:12.809670    4908 command_runner.go:124] > Modify: 2021-08-10 20:02:08.000000000 +0000
	I0813 20:25:12.809678    4908 command_runner.go:124] > Change: 2021-08-13 20:24:29.381091489 +0000
	I0813 20:25:12.809682    4908 command_runner.go:124] >  Birth: -
	I0813 20:25:12.809726    4908 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:25:12.809739    4908 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:25:12.826189    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:25:13.371212    4908 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0813 20:25:13.371243    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0813 20:25:13.371270    4908 command_runner.go:124] > serviceaccount/kindnet created
	I0813 20:25:13.371277    4908 command_runner.go:124] > daemonset.apps/kindnet created
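
With one node so far but a multinode profile, the CNI manager settles on kindnet, renders its manifest to /var/tmp/minikube/cni.yaml on the node ("scp memory" above), and applies it with the cluster's own pinned kubectl, producing the four "created" lines. A sketch of the apply step, where runSSH stands in (as an assumption, not a real minikube API) for the SSH command runner:

    // Sketch: apply the rendered CNI manifest with the version-pinned
    // kubectl binary against the node-local kubeconfig.
    package cnisketch

    func deployCNI(runSSH func(cmd string) error) error {
        return runSSH("sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply " +
            "--kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml")
    }
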
	I0813 20:25:13.372077    4908 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:25:13.372152    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:13.372165    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=multinode-20210813202419-30853 minikube.k8s.io/updated_at=2021_08_13T20_25_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:13.398902    4908 command_runner.go:124] > -16
	I0813 20:25:13.543665    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0813 20:25:13.543745    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:13.543798    4908 command_runner.go:124] > node/multinode-20210813202419-30853 labeled
	I0813 20:25:13.543837    4908 ops.go:34] apiserver oom_adj: -16
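
The "-16" read back here is the kube-apiserver's OOM score adjustment: a negative value makes the kernel OOM killer much less likely to pick the apiserver under memory pressure, and minikube records it as "apiserver oom_adj: -16". A sketch of the same probe, with the pgrep-based pid discovery omitted:

    // Sketch: read /proc/<pid>/oom_adj for a known apiserver pid.
    package oomsketch

    import (
        "fmt"
        "os"
        "strings"
    )

    func oomAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil // e.g. "-16"
    }
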
	I0813 20:25:13.644100    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:14.144960    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:14.251465    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:14.644917    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:14.758484    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:15.144974    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:15.251636    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:15.644645    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:15.761083    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:16.144985    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:16.262175    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:16.644790    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:16.751945    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:17.144514    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:17.243537    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:17.644808    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:17.750454    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:18.145181    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:18.457392    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:18.644918    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:18.769135    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:19.145018    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:19.264331    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:19.644391    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:19.756377    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:20.145045    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:20.256966    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:20.644536    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:20.754430    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:21.145141    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:21.253463    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:21.645101    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:21.756729    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:22.145182    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:22.243392    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:22.645133    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:22.748965    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:23.145058    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:23.248005    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:23.645177    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:23.833689    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:24.144894    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:24.334176    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:24.644535    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:24.755823    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:25.144379    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:25.416222    4908 command_runner.go:124] > NAME      SECRETS   AGE
	I0813 20:25:25.416248    4908 command_runner.go:124] > default   1         0s
	I0813 20:25:25.416271    4908 kubeadm.go:985] duration metric: took 12.04420308s to wait for elevateKubeSystemPrivileges.
	I0813 20:25:25.416286    4908 kubeadm.go:392] StartCluster complete in 33.482194053s
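
The burst of serviceaccounts "default" not found lines above is a deliberate ~500ms poll: kube-controller-manager creates the default ServiceAccount asynchronously after the control plane comes up, and the code simply retries "kubectl get sa default" until it exists (about 12s here). A sketch of the same wait with client-go and apimachinery's polling helper, assuming an already-constructed clientset:

    // Sketch: poll until the "default" ServiceAccount appears in the
    // "default" namespace.
    package sasketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForDefaultSA(cs kubernetes.Interface) error {
        return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err != nil {
                return false, nil // not created yet; keep polling
            }
            return true, nil
        })
    }
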
	I0813 20:25:25.416307    4908 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:25.416448    4908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:25.417295    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:25.417861    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:25.418155    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:25.418644    4908 cert_rotation.go:137] Starting client certificate rotation controller
	I0813 20:25:25.419820    4908 round_trippers.go:432] GET https://192.168.39.64:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:25.419836    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.419841    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.419844    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.429297    4908 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 20:25:25.429313    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.429319    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.429324    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.429329    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.429333    4908 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:25.429338    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.429342    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.429366    4908 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"440","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.430157    4908 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"440","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.430214    4908 round_trippers.go:432] PUT https://192.168.39.64:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:25.430233    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.430240    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.430247    4908 round_trippers.go:442]     Content-Type: application/json
	I0813 20:25:25.430254    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.434776    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:25.434791    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.434796    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.434799    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.434802    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.434805    4908 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:25.434808    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.434811    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.434825    4908 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"442","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.935652    4908 round_trippers.go:432] GET https://192.168.39.64:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:25.935679    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.935690    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.935694    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.939785    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:25.939802    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.939806    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.939809    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.939812    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.939815    4908 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:25.939818    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.939821    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.939839    4908 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"453","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.939926    4908 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210813202419-30853" rescaled to 1
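
The GET/PUT pair on .../deployments/coredns/scale above is the Scale subresource in action: the stock kubeadm deployment asks for two coredns replicas, and minikube trims it to one, since a single-node control plane gains nothing from DNS redundancy. An equivalent sketch with client-go's scale helpers, clientset construction omitted:

    // Sketch: scale the kube-system/coredns Deployment down to a single
    // replica via the Scale subresource, matching the PUT above.
    package scalesketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func rescaleCoreDNS(cs kubernetes.Interface) error {
        ctx := context.TODO()
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
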
	I0813 20:25:25.939976    4908 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:25:25.939985    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:25:25.951751    4908 out.go:177] * Verifying Kubernetes components...
	I0813 20:25:25.940078    4908 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:25:25.940215    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:25.951839    4908 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210813202419-30853"
	I0813 20:25:25.951869    4908 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210813202419-30853"
	W0813 20:25:25.951876    4908 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:25:25.951830    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:25:25.951911    4908 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:25:25.951848    4908 addons.go:59] Setting default-storageclass=true in profile "multinode-20210813202419-30853"
	I0813 20:25:25.951970    4908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210813202419-30853"
	I0813 20:25:25.952434    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.952445    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.952485    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.952523    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.963548    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0813 20:25:25.963992    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.964452    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.964475    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.964826    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.964990    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:25:25.968250    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41769
	I0813 20:25:25.968666    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.969156    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.969185    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.969370    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:25.969539    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.969663    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:25.969995    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.970033    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.971099    4908 round_trippers.go:432] GET https://192.168.39.64:8443/apis/storage.k8s.io/v1/storageclasses
	I0813 20:25:25.971114    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.971123    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.971129    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.975871    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:25.975885    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.975889    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.975893    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.975902    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.975909    4908 round_trippers.go:463]     Content-Length: 109
	I0813 20:25:25.975914    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.975919    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.975989    4908 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"453"},"items":[]}
	I0813 20:25:25.976521    4908 addons.go:135] Setting addon default-storageclass=true in "multinode-20210813202419-30853"
	W0813 20:25:25.976538    4908 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:25:25.976567    4908 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:25:25.976849    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.976891    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.980405    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0813 20:25:25.980839    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.981331    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.981353    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.981816    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.982000    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:25:25.984919    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:25:25.986829    4908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:25:25.986985    4908 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:25:25.987002    4908 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:25:25.987020    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:25:25.987783    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0813 20:25:25.988196    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.988659    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.988682    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.988993    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.989553    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.989603    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.992706    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:25.993085    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:25:25.993116    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:25.993249    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:25:25.993401    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:25:25.993524    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:25:25.993634    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:25:26.000649    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34563
	I0813 20:25:26.001002    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:26.001394    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:26.001415    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:26.001723    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:26.001896    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:25:26.004525    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:25:26.004735    4908 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:25:26.004755    4908 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:25:26.004773    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:25:26.010190    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:26.010600    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:25:26.010635    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:26.010752    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:25:26.010913    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:25:26.011060    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:25:26.011187    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:25:26.233408    4908 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:25:26.250861    4908 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:25:26.266369    4908 command_runner.go:124] > apiVersion: v1
	I0813 20:25:26.266391    4908 command_runner.go:124] > data:
	I0813 20:25:26.266396    4908 command_runner.go:124] >   Corefile: |
	I0813 20:25:26.266399    4908 command_runner.go:124] >     .:53 {
	I0813 20:25:26.266403    4908 command_runner.go:124] >         errors
	I0813 20:25:26.266408    4908 command_runner.go:124] >         health {
	I0813 20:25:26.266413    4908 command_runner.go:124] >            lameduck 5s
	I0813 20:25:26.266417    4908 command_runner.go:124] >         }
	I0813 20:25:26.266421    4908 command_runner.go:124] >         ready
	I0813 20:25:26.266428    4908 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0813 20:25:26.266438    4908 command_runner.go:124] >            pods insecure
	I0813 20:25:26.266445    4908 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0813 20:25:26.266450    4908 command_runner.go:124] >            ttl 30
	I0813 20:25:26.266455    4908 command_runner.go:124] >         }
	I0813 20:25:26.266459    4908 command_runner.go:124] >         prometheus :9153
	I0813 20:25:26.266465    4908 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0813 20:25:26.266470    4908 command_runner.go:124] >            max_concurrent 1000
	I0813 20:25:26.266475    4908 command_runner.go:124] >         }
	I0813 20:25:26.266479    4908 command_runner.go:124] >         cache 30
	I0813 20:25:26.266482    4908 command_runner.go:124] >         loop
	I0813 20:25:26.266486    4908 command_runner.go:124] >         reload
	I0813 20:25:26.266490    4908 command_runner.go:124] >         loadbalance
	I0813 20:25:26.266493    4908 command_runner.go:124] >     }
	I0813 20:25:26.266498    4908 command_runner.go:124] > kind: ConfigMap
	I0813 20:25:26.266501    4908 command_runner.go:124] > metadata:
	I0813 20:25:26.266513    4908 command_runner.go:124] >   creationTimestamp: "2021-08-13T20:25:12Z"
	I0813 20:25:26.266521    4908 command_runner.go:124] >   name: coredns
	I0813 20:25:26.266528    4908 command_runner.go:124] >   namespace: kube-system
	I0813 20:25:26.266535    4908 command_runner.go:124] >   resourceVersion: "281"
	I0813 20:25:26.266544    4908 command_runner.go:124] >   uid: 926124bc-ba0a-4974-ac80-8723f8307429
	I0813 20:25:26.272432    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:26.272652    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:26.272894    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
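
The sed pipeline above splices a hosts plugin block into the Corefile fetched a few lines earlier, just before the forward stanza, so pods can resolve host.minikube.internal to the libvirt gateway (192.168.39.1). After the replace, the patched server block reads:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }

The fallthrough directive keeps every other name flowing on to the forward plugin unchanged.
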
	I0813 20:25:26.273876    4908 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813202419-30853" to be "Ready" ...
	I0813 20:25:26.273943    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:26.273951    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.273956    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.273960    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.276760    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:26.276773    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.276778    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.276783    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.276787    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.276790    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.276795    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.277481    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:26.279715    4908 node_ready.go:49] node "multinode-20210813202419-30853" has status "Ready":"True"
	I0813 20:25:26.279731    4908 node_ready.go:38] duration metric: took 5.835411ms waiting for node "multinode-20210813202419-30853" to be "Ready" ...
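
Both this node check and the pod wait that follows reduce to scanning the object's status conditions: a node counts as "Ready" when its NodeReady condition reports True, and the system-critical pods below are judged the same way via PodReady. A sketch of both predicates, assuming the standard k8s.io/api types:

    // Sketch: the readiness predicates behind node_ready.go / pod_ready.go
    // style waits - find the Ready-type condition and require status True.
    package readysketch

    import corev1 "k8s.io/api/core/v1"

    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
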
	I0813 20:25:26.279738    4908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:25:26.279795    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:26.279804    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.279809    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.279813    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.283212    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:26.283229    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.283236    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.283240    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.283246    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.283251    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.283256    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.284034    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"434","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 53650 chars]
	I0813 20:25:26.289384    4908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:26.289452    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:26.289465    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.289471    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.289475    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.292557    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:26.292580    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.292587    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.292593    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.292599    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.292604    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.292609    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.292831    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"434","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4268 chars]
	I0813 20:25:26.298289    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:26.298318    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.298326    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.298333    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.300670    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:26.300685    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.300689    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.300692    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.300695    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.300698    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.300701    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.300903    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:26.801597    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:26.801627    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.801635    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.801641    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.804860    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:26.804882    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.804889    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.804895    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.804900    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.804905    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.804911    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.805028    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:26.805437    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:26.805462    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.805469    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.805483    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.807768    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:26.807789    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.807795    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.807800    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.807804    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.807808    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.807813    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.807988    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:27.301610    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:27.301640    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.301647    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.301652    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.305200    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:27.305225    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.305231    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.305236    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.305241    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.305245    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.305249    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.305773    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:27.306227    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:27.306248    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.306255    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.306260    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.308537    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:27.308551    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.308556    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.308559    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.308562    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.308567    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.308571    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.308925    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:27.801623    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:27.801659    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.801668    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.801674    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.804707    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:27.804732    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.804737    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.804740    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.804743    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.804746    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.804750    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.804907    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:27.805342    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:27.805364    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.805372    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.805377    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.808023    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:27.808034    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.808038    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.808041    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.808044    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.808047    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.808049    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.808384    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:28.142997    4908 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0813 20:25:28.148693    4908 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0813 20:25:28.148722    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0813 20:25:28.148793    4908 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.915348399s)
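The apply that just completed comes from the default-storageclass addon and creates the StorageClass named "standard" (logged above). A sketch of what such a manifest typically looks like is below; the provisioner value and the default-class annotation are assumptions based on minikube's hostpath provisioner, not a copy of the bundled storageclass.yaml:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"   # assumed: marks it the cluster default
	provisioner: k8s.io/minikube-hostpath                     # assumed provisioner name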
	I0813 20:25:28.148843    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.148864    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.149152    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.149210    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.149227    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.149237    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.149509    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Closing plugin on server side
	I0813 20:25:28.149551    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.149562    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.149575    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.149587    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.149857    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.149878    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.154782    4908 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 20:25:28.170480    4908 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 20:25:28.182623    4908 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0813 20:25:28.202105    4908 command_runner.go:124] > pod/storage-provisioner created
	I0813 20:25:28.204764    4908 command_runner.go:124] > configmap/coredns replaced
	I0813 20:25:28.204795    4908 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.931877788s)
	I0813 20:25:28.204809    4908 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
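The sed expression in the command above rewrites the coredns ConfigMap so that a hosts block is inserted immediately before the 'forward . /etc/resolv.conf' line of the Corefile. Reconstructed from that expression, the injected fragment is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

which lets pods resolve host.minikube.internal to 192.168.39.1, the host side of the VM's network, while all other names fall through to the normal forwarders.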
	I0813 20:25:28.204923    4908 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.954041212s)
	I0813 20:25:28.204954    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.204965    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.205213    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.205235    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.205246    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.205256    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.205478    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.205494    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.207371    4908 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0813 20:25:28.207397    4908 addons.go:344] enableAddons completed in 2.267327105s
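Addon setup finishes here while the pod readiness wait continues below. To confirm the same state by hand, commands along these lines would list the enabled addons and the newly created default StorageClass (illustrative usage, not part of the test run):

	out/minikube-linux-amd64 -p multinode-20210813202419-30853 addons list
	kubectl --context multinode-20210813202419-30853 get storageclass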
	I0813 20:25:28.301769    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:28.301791    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.301797    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.301801    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.311305    4908 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 20:25:28.311330    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.311338    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.311344    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.311349    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.311355    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.311360    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.311529    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:28.311965    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:28.311995    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.312003    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.312010    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.320767    4908 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0813 20:25:28.320787    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.320794    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.320801    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.320808    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.320813    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.320822    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.321083    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:28.321438    4908 pod_ready.go:102] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"False"
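The loop has now been polling for about two seconds without the Ready condition turning True; the pod's resourceVersion advancing from 434 to 455 shows the object is still being updated while its containers start. An equivalent one-off check of the condition the loop inspects would be (illustrative):

	kubectl --context multinode-20210813202419-30853 -n kube-system get pod coredns-558bd4d5db-58k2l -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'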
	I0813 20:25:28.801730    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:28.801760    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.801768    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.801775    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.804548    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:28.804572    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.804579    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.804583    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.804588    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.804592    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.804596    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.804781    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:28.805311    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:28.805329    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.805336    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.805342    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.808044    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:28.808062    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.808069    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.808073    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.808078    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.808082    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.808087    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.808746    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:29.302322    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:29.302354    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.302363    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.302370    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.306794    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:29.306814    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.306819    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.306823    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.306828    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.306832    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.306836    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.307190    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:29.307520    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:29.307532    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.307537    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.307541    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.309520    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:29.309539    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.309544    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.309549    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.309553    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.309558    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.309563    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.309916    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:29.801509    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:29.801533    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.801538    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.801542    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.807276    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:25:29.807294    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.807299    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.807302    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.807306    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.807309    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.807311    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.807478    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:29.808065    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:29.808089    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.808096    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.808102    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.810517    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:29.810533    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.810538    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.810543    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.810547    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.810552    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.810556    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.810658    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:30.302310    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:30.302339    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.302346    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.302352    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.305420    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:30.305435    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.305440    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.305445    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.305450    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.305454    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.305458    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.306099    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:30.306488    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:30.306506    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.306511    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.306515    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.309527    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:30.309540    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.309545    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.309548    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.309551    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.309554    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.309557    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.309934    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:30.801554    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:30.801577    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.801583    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.801587    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.804052    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:30.804069    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.804076    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.804081    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.804086    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.804091    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.804095    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.804584    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:30.804997    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:30.805016    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.805022    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.805028    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.807354    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:30.807365    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.807369    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.807372    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.807375    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.807378    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.807381    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.807571    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:30.807885    4908 pod_ready.go:102] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:31.301912    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:31.301936    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.301941    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.301945    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.306270    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:31.306292    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.306301    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.306306    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.306310    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.306314    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.306319    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.306656    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:31.307097    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:31.307115    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.307122    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.307128    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.309737    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:31.309753    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.309761    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.309766    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.309770    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.309775    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.309779    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.309855    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:31.801440    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:31.801470    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.801477    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.801484    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.804630    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:31.804650    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.804656    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.804661    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.804665    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.804670    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.804674    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.804755    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:31.805087    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:31.805103    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.805109    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.805115    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.808371    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:31.808391    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.808397    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.808402    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.808405    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.808409    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.808412    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.808491    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:32.302122    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:32.302155    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.302162    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.302168    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.305946    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.305959    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.305965    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.305970    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.305974    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.305979    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.305984    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.306325    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:32.306651    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:32.306666    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.306675    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.306688    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.308656    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:32.308673    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.308678    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.308681    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.308684    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.308687    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.308690    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.308926    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:32.801598    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:32.801628    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.801636    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.801642    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.805937    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:32.805961    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.805967    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.805975    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.805991    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.805999    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.806003    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.806475    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5733 chars]
	I0813 20:25:32.806977    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:32.807001    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.807009    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.807016    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.810106    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.810125    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.810131    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.810136    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.810140    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.810145    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.810150    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.810332    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:32.810685    4908 pod_ready.go:92] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:32.810711    4908 pod_ready.go:81] duration metric: took 6.521303635s waiting for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:32.810726    4908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace to be "Ready" ...
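
	(The round_trippers and request.go lines around this point are client-go's HTTP debug output for minikube's readiness wait: pod_ready.go re-fetches the pod and its node roughly every 500 ms until the pod's Ready condition turns True or the 6m0s budget runs out. As a reading aid, here is a minimal Go sketch of that kind of poll against the same pod. It is an illustration only — the kubeconfig path is assumed and the 500 ms interval is inferred from the timestamps above; this is not minikube's actual pod_ready.go implementation.

	    // Hypothetical sketch of the readiness poll visible in this log:
	    // fetch the pod every 500ms and stop once its Ready condition is True.
	    // Assumes a reachable cluster and a kubeconfig in the default location.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(config)

	        // Poll every 500ms, give up after 6 minutes -- the same budget
	        // pod_ready.go reports ("waiting up to 6m0s ...").
	        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
	            pod, err := client.CoreV1().Pods("kube-system").Get(
	                context.TODO(), "coredns-558bd4d5db-nnsgn", metav1.GetOptions{})
	            if err != nil {
	                return false, err // stop on API errors
	            }
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady {
	                    return cond.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil // no Ready condition yet; keep polling
	        })
	        fmt.Println("ready:", err == nil)
	    }

	End of sketch; the verbatim log continues below.)
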
	I0813 20:25:32.810796    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:32.810810    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.810816    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.810821    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.814812    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.814830    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.814836    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.814841    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.814845    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.814865    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.814870    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.816054    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:32.816408    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:32.816423    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.816429    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.816433    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.819832    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.819850    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.819859    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.819863    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.819868    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.819872    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.819877    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.820682    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:33.321758    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:33.321784    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.321790    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.321794    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.324800    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:33.324823    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.324830    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.324835    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.324839    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.324844    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.324848    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.325035    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:33.325506    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:33.325530    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.325536    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.325540    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.327692    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:33.327711    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.327718    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.327723    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.327728    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.327733    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.327751    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.328140    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:33.821834    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:33.821860    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.821866    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.821871    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.828588    4908 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 20:25:33.828613    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.828620    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.828625    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.828628    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.828631    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.828636    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.829958    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:33.830380    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:33.830397    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.830403    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.830407    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.832893    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:33.832915    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.832922    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.832926    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.832931    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.832936    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.832940    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.833136    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:34.321464    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:34.321493    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.321499    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.321503    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.324285    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:34.324309    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.324315    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.324320    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.324324    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.324330    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.324334    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.324568    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:34.324918    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:34.324936    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.324942    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.324948    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.328440    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:34.328457    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.328463    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.328468    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.328472    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.328477    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.328481    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.328952    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:34.821592    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:34.821619    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.821625    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.821629    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.824680    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:34.824697    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.824702    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.824705    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.824709    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.824712    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.824715    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.824805    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:34.825202    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:34.825220    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.825227    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.825234    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.828083    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:34.828104    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.828109    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.828112    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.828115    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.828118    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.828121    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.828281    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:34.828526    4908 pod_ready.go:102] pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace has status "Ready":"False"
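
	(Worth noticing in the response bodies above: coredns-558bd4d5db-nnsgn carries "deletionTimestamp":"2021-08-13T20:25:55Z" with deletionGracePeriodSeconds:30, i.e. the ReplicaSet is scaling the second CoreDNS replica away. A terminating pod will never report Ready, which is why the waiter keeps logging "Ready":"False" until the object disappears. A hedged refinement of the sketch above would detect that state via the standard DeletionTimestamp field — a hypothetical helper, not part of minikube:

	    package main

	    import corev1 "k8s.io/api/core/v1"

	    // isTerminating reports whether the API server has begun deleting the
	    // pod -- exactly the state the response body above shows for
	    // coredns-558bd4d5db-nnsgn (deletionTimestamp set, 30s grace period).
	    func isTerminating(pod *corev1.Pod) bool {
	        return pod.DeletionTimestamp != nil
	    }

	The verbatim log continues below.)
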
	I0813 20:25:35.321987    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:35.322012    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.322017    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.322022    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.328029    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:25:35.328049    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.328055    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.328060    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.328064    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.328068    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.328072    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.328950    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:35.329294    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:35.329311    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.329317    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.329323    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.333158    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:35.333176    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.333182    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.333187    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.333191    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.333194    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.333197    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.334216    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:35.822051    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:35.822078    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.822087    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.822091    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.826605    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:35.826628    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.826634    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.826639    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.826643    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.826646    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.826649    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.827191    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:35.827509    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:35.827521    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.827526    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.827530    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.830085    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:35.830100    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.830106    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.830110    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.830115    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.830120    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.830123    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.830664    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:36.321865    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:36.321891    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.321896    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.321901    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.325423    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:36.325442    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.325448    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.325453    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.325457    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.325461    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.325465    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.325905    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:36.326264    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:36.326315    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.326339    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.326346    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.329645    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:36.329661    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.329665    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.329669    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.329672    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.329674    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.329678    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.329856    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:36.821488    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:36.821515    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.821523    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.821528    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.824589    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:36.824625    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.824633    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.824637    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.824642    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.824646    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.824654    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.825014    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:36.825359    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:36.825375    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.825380    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.825384    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.827882    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:36.827905    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.827912    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.827917    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.827921    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.827925    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.827930    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.828088    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:37.321802    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:37.321827    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.321833    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.321837    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.326481    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:37.326498    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.326504    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.326507    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.326510    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.326513    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.326516    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.326717    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:37.327189    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:37.327210    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.327217    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.327223    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.329586    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:37.329603    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.329608    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.329613    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.329619    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.329624    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.329628    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.329852    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:37.330251    4908 pod_ready.go:102] pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace has status "Ready":"False"
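
	(For reference, output in this shape — a GET line, request and response headers, then a truncated response body — is what client-go emits once klog verbosity is raised; the source locations in the prefixes (round_trippers.go, request.go:1123) are client-go's debug round tripper. A sketch of enabling it in a Go program, assuming roughly the conventional levels (URLs at -v=6, headers at -v=7, truncated bodies at -v=8); level 8 here is an inference from the truncated bodies seen in this log, not something the report states:

	    package main

	    import (
	        "flag"

	        "k8s.io/klog/v2"
	    )

	    func main() {
	        // Raise klog verbosity before building a client-go clientset; the
	        // debug round tripper then logs each request and response much
	        // like the lines in this test log.
	        klog.InitFlags(nil)
	        _ = flag.Set("v", "8")
	        // ... construct the clientset and poll as in the first sketch ...
	    }

	The verbatim log continues below.)
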
	I0813 20:25:37.821454    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:37.821484    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.821492    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.821498    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.826053    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:37.826071    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.826076    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.826079    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.826082    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.826085    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.826088    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.826269    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:37.826627    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:37.826644    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.826649    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.826657    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.829440    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:37.829450    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.829454    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.829457    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.829460    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.829463    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.829466    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.829760    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.321428    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:38.321455    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.321461    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.321465    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.324410    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.324425    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.324430    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.324433    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.324436    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.324439    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.324444    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.324775    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:38.325171    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.325193    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.325200    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.325207    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.327948    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.327959    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.327963    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.327966    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.327969    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.327972    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.327975    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.328511    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.821159    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:38.821185    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.821190    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.821195    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.823634    4908 round_trippers.go:457] Response Status: 404 Not Found in 2 milliseconds
	I0813 20:25:38.823645    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.823649    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.823657    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.823662    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.823666    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.823673    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.823677    4908 round_trippers.go:463]     Content-Length: 216
	I0813 20:25:38.823911    4908 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-nnsgn\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-nnsgn","kind":"pods"},"code":404}
	I0813 20:25:38.824430    4908 pod_ready.go:97] error getting pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-nnsgn" not found
	I0813 20:25:38.824456    4908 pod_ready.go:81] duration metric: took 6.013715908s waiting for pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace to be "Ready" ...
	E0813 20:25:38.824466    4908 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-nnsgn" not found
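The three entries above show how the pod wait tolerates a pod that disappears mid-poll: the 404 for coredns-558bd4d5db-nnsgn is logged as "(skipping!)" rather than failing the wait, because the ReplicaSet has already replaced that pod. A minimal client-go sketch of this behaviour (illustrative only, not minikube's actual pod_ready.go):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod reports Ready, vanishes (treated as a
// non-fatal skip, matching the log above), or the timeout elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("pod %q not found, skipping\n", name)
			return nil // replaced by its controller; not a failure
		}
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll spacing visible in the timestamps
	}
	return fmt.Errorf("pod %q not Ready within %v", name, timeout)
}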
	I0813 20:25:38.824475    4908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.824563    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813202419-30853
	I0813 20:25:38.824576    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.824583    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.824589    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.827235    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.827265    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.827270    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.827273    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.827277    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.827279    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.827282    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.827427    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813202419-30853","namespace":"kube-system","uid":"0e8c51de-4800-4c2d-af81-4f4f197d3cd5","resourceVersion":"491","creationTimestamp":"2021-08-13T20:25:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.64:2379","kubernetes.io/config.hash":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.mirror":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.seen":"2021-08-13T20:25:00.776305134Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0813 20:25:38.827726    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.827739    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.827744    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.827748    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.830263    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.830275    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.830279    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.830282    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.830285    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.830288    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.830290    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.830967    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.831278    4908 pod_ready.go:92] pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.831296    4908 pod_ready.go:81] duration metric: took 6.782033ms waiting for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.831311    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.831365    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813202419-30853
	I0813 20:25:38.831376    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.831382    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.831388    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.833676    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.833692    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.833697    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.833702    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.833706    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.833710    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.833714    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.834111    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813202419-30853","namespace":"kube-system","uid":"53b6207c-cf99-4cb1-b237-0e69df65538b","resourceVersion":"478","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.64:8443","kubernetes.io/config.hash":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.mirror":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.seen":"2021-08-13T20:25:00.776307664Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0813 20:25:38.834365    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.834376    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.834380    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.834384    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.836187    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:38.836198    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.836202    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.836206    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.836209    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.836212    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.836215    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.836358    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.836630    4908 pod_ready.go:92] pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.836642    4908 pod_ready.go:81] duration metric: took 5.323998ms waiting for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.836650    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.836690    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813202419-30853
	I0813 20:25:38.836699    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.836704    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.836708    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.838515    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:38.838527    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.838531    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.838534    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.838537    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.838540    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.838543    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.838799    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813202419-30853","namespace":"kube-system","uid":"f1752bba-a132-4093-8ff3-ad48483d468b","resourceVersion":"475","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.mirror":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.seen":"2021-08-13T20:25:00.776309845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0813 20:25:38.839137    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.839153    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.839158    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.839163    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.841254    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.841266    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.841269    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.841273    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.841276    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.841279    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.841282    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.841452    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.841692    4908 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.841708    4908 pod_ready.go:81] duration metric: took 5.049968ms waiting for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.841719    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.841770    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rb42p
	I0813 20:25:38.841782    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.841788    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.841794    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.843333    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:38.843349    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.843356    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.843361    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.843365    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.843370    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.843374    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.843750    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rb42p","generateName":"kube-proxy-","namespace":"kube-system","uid":"5633ede2-5578-4565-97af-b83cf1b25f0d","resourceVersion":"459","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb4b18d1-5cff-490a-b573-900487c4d9e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb4b18d1-5cff-490a-b573-900487c4d9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0813 20:25:38.843990    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.844001    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.844006    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.844010    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.846335    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.846346    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.846352    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.846356    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.846361    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.846365    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.846368    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.846808    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.847106    4908 pod_ready.go:92] pod "kube-proxy-rb42p" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.847123    4908 pod_ready.go:81] duration metric: took 5.39711ms waiting for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.847133    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.847197    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202419-30853
	I0813 20:25:38.847209    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.847215    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.847221    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.851869    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:38.851877    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.851880    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.851883    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.851886    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.851889    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.851895    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.852320    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813202419-30853","namespace":"kube-system","uid":"ed906c56-f110-4e49-aa1c-5e0e0b8cb88c","resourceVersion":"384","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.mirror":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.seen":"2021-08-13T20:25:00.776286387Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0813 20:25:39.021810    4908 request.go:600] Waited for 169.226887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:39.021868    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:39.021874    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.021879    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.021884    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.067997    4908 round_trippers.go:457] Response Status: 200 OK in 46 milliseconds
	I0813 20:25:39.068023    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.068030    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.068034    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.068039    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.068043    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.068047    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.068186    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:39.068459    4908 pod_ready.go:92] pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:39.068472    4908 pod_ready.go:81] duration metric: took 221.329677ms waiting for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:39.068483    4908 pod_ready.go:38] duration metric: took 12.788734644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
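The recurring "Waited for … due to client-side throttling, not priority and fairness" entries come from client-go's token-bucket rate limiter, not from the API server: with the default rest.Config limits (QPS 5, Burst 10), the burst of back-to-back GETs above drains the bucket, and each further request sleeps until a token refills. A sketch of raising those limits (the values are arbitrary):

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newLessThrottledClient builds a clientset whose client-side limiter allows
// more requests per second, removing waits like the 169ms one above.
func newLessThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default 5: sustained requests per second
	cfg.Burst = 100 // default 10: bucket size for short bursts
	return kubernetes.NewForConfig(cfg)
}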
	I0813 20:25:39.068507    4908 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:25:39.068559    4908 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:25:39.080065    4908 command_runner.go:124] > 2597
	I0813 20:25:39.080421    4908 api_server.go:70] duration metric: took 13.140417912s to wait for apiserver process to appear ...
	I0813 20:25:39.080435    4908 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:25:39.080446    4908 api_server.go:239] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0813 20:25:39.087991    4908 api_server.go:265] https://192.168.39.64:8443/healthz returned 200:
	ok
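Control-plane health is established in two steps here: pgrep confirms a kube-apiserver process exists, then GET /healthz must return HTTP 200 with the literal body "ok". A sketch of the healthz probe, assuming an *http.Client already configured with the cluster CA:

package sketch

import (
	"io"
	"net/http"
)

// apiserverHealthy reports whether GET <endpoint>/healthz answered
// 200 OK with body "ok", as in the log above.
func apiserverHealthy(client *http.Client, endpoint string) (bool, error) {
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}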
	I0813 20:25:39.088054    4908 round_trippers.go:432] GET https://192.168.39.64:8443/version?timeout=32s
	I0813 20:25:39.088064    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.088070    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.088084    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.089109    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:39.089123    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.089128    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.089133    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.089137    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.089142    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.089146    4908 round_trippers.go:463]     Content-Length: 263
	I0813 20:25:39.089149    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.089270    4908 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0813 20:25:39.089363    4908 api_server.go:139] control plane version: v1.21.3
	I0813 20:25:39.089380    4908 api_server.go:129] duration metric: took 8.93951ms to wait for apiserver health ...
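The pretty-printed JSON above is the apiserver's /version document, and its gitVersion becomes the logged "control plane version". Through client-go the same call is a single discovery-client method; a sketch (the kubeconfig path is illustrative):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	info, err := cs.Discovery().ServerVersion() // the GET /version?timeout=32s seen above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("control plane version:", info.GitVersion) // "v1.21.3" in this run
}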
	I0813 20:25:39.089389    4908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:25:39.222007    4908 request.go:600] Waited for 132.539789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.222073    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.222081    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.222089    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.222131    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.229677    4908 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 20:25:39.229698    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.229705    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.229710    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.229713    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.229716    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.229719    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.232359    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52849 chars]
	I0813 20:25:39.233519    4908 system_pods.go:59] 8 kube-system pods found
	I0813 20:25:39.233539    4908 system_pods.go:61] "coredns-558bd4d5db-58k2l" [0431b736-8284-40c7-9bc4-fcc968e4c41b] Running
	I0813 20:25:39.233544    4908 system_pods.go:61] "etcd-multinode-20210813202419-30853" [0e8c51de-4800-4c2d-af81-4f4f197d3cd5] Running
	I0813 20:25:39.233547    4908 system_pods.go:61] "kindnet-hc4k2" [8c73e66e-2ec6-4a1b-a7af-3edb2c517f18] Running
	I0813 20:25:39.233551    4908 system_pods.go:61] "kube-apiserver-multinode-20210813202419-30853" [53b6207c-cf99-4cb1-b237-0e69df65538b] Running
	I0813 20:25:39.233555    4908 system_pods.go:61] "kube-controller-manager-multinode-20210813202419-30853" [f1752bba-a132-4093-8ff3-ad48483d468b] Running
	I0813 20:25:39.233561    4908 system_pods.go:61] "kube-proxy-rb42p" [5633ede2-5578-4565-97af-b83cf1b25f0d] Running
	I0813 20:25:39.233564    4908 system_pods.go:61] "kube-scheduler-multinode-20210813202419-30853" [ed906c56-f110-4e49-aa1c-5e0e0b8cb88c] Running
	I0813 20:25:39.233568    4908 system_pods.go:61] "storage-provisioner" [7839155d-5552-45cb-ab31-a243fd82f32e] Running
	I0813 20:25:39.233573    4908 system_pods.go:74] duration metric: took 144.178753ms to wait for pod list to return data ...
	I0813 20:25:39.233588    4908 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:25:39.422006    4908 request.go:600] Waited for 188.350998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/default/serviceaccounts
	I0813 20:25:39.422073    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/default/serviceaccounts
	I0813 20:25:39.422078    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.422083    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.422093    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.425157    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:39.425180    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.425187    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.425192    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.425196    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.425199    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.425202    4908 round_trippers.go:463]     Content-Length: 304
	I0813 20:25:39.425205    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.425226    4908 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6031096d-6790-4fd8-abfd-6fb4c0c8f61a","resourceVersion":"406","creationTimestamp":"2021-08-13T20:25:25Z"},"secrets":[{"name":"default-token-9blrs"}]}]}
	I0813 20:25:39.425736    4908 default_sa.go:45] found service account: "default"
	I0813 20:25:39.425753    4908 default_sa.go:55] duration metric: took 192.159209ms for default service account to be created ...
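The default ServiceAccount is created asynchronously by kube-controller-manager once the namespace exists, which is why it is polled for rather than assumed. A sketch of the check (illustrative, not minikube's default_sa.go):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDefaultSA reports whether the controller manager has created the
// "default" service account in the default namespace yet.
func hasDefaultSA(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			return true, nil
		}
	}
	return false, nil
}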
	I0813 20:25:39.425761    4908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:25:39.622231    4908 request.go:600] Waited for 196.387039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.622290    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.622297    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.622302    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.622306    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.625739    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:39.625764    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.625771    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.625775    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.625779    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.625784    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.625788    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.626375    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52849 chars]
	I0813 20:25:39.628032    4908 system_pods.go:86] 8 kube-system pods found
	I0813 20:25:39.628059    4908 system_pods.go:89] "coredns-558bd4d5db-58k2l" [0431b736-8284-40c7-9bc4-fcc968e4c41b] Running
	I0813 20:25:39.628067    4908 system_pods.go:89] "etcd-multinode-20210813202419-30853" [0e8c51de-4800-4c2d-af81-4f4f197d3cd5] Running
	I0813 20:25:39.628097    4908 system_pods.go:89] "kindnet-hc4k2" [8c73e66e-2ec6-4a1b-a7af-3edb2c517f18] Running
	I0813 20:25:39.628106    4908 system_pods.go:89] "kube-apiserver-multinode-20210813202419-30853" [53b6207c-cf99-4cb1-b237-0e69df65538b] Running
	I0813 20:25:39.628117    4908 system_pods.go:89] "kube-controller-manager-multinode-20210813202419-30853" [f1752bba-a132-4093-8ff3-ad48483d468b] Running
	I0813 20:25:39.628125    4908 system_pods.go:89] "kube-proxy-rb42p" [5633ede2-5578-4565-97af-b83cf1b25f0d] Running
	I0813 20:25:39.628130    4908 system_pods.go:89] "kube-scheduler-multinode-20210813202419-30853" [ed906c56-f110-4e49-aa1c-5e0e0b8cb88c] Running
	I0813 20:25:39.628138    4908 system_pods.go:89] "storage-provisioner" [7839155d-5552-45cb-ab31-a243fd82f32e] Running
	I0813 20:25:39.628151    4908 system_pods.go:126] duration metric: took 202.383679ms to wait for k8s-apps to be running ...
	I0813 20:25:39.628164    4908 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:25:39.628217    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:25:39.639184    4908 system_svc.go:56] duration metric: took 11.015292ms WaitForService to wait for kubelet.
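The kubelet check above runs systemctl is-active --quiet over the SSH runner; with --quiet the command prints nothing and reports state purely through its exit code. A local analogue:

package sketch

import "os/exec"

// serviceActive reports whether a systemd unit is active: `is-active --quiet`
// exits 0 iff the unit is running, so a nil error from Run() means "active".
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}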
	I0813 20:25:39.639209    4908 kubeadm.go:547] duration metric: took 13.699205758s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:25:39.639228    4908 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:25:39.821632    4908 request.go:600] Waited for 182.336333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes
	I0813 20:25:39.821699    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes
	I0813 20:25:39.821709    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.821717    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.821729    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.825121    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:39.825133    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.825139    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.825144    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.825149    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.825154    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.825160    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.825313    4908 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 6606 chars]
	I0813 20:25:39.826298    4908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:25:39.826327    4908 node_conditions.go:123] node cpu capacity is 2
	I0813 20:25:39.826345    4908 node_conditions.go:105] duration metric: took 187.112573ms to run NodePressure ...
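The NodePressure step lists the nodes and reads their advertised capacities; the two figures above (17784752Ki ephemeral storage, 2 CPUs) come straight from .status.capacity. A sketch:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities mirrors the node_conditions check above: list all
// nodes and report CPU and ephemeral-storage capacity for each.
func printNodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}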
	I0813 20:25:39.826358    4908 start.go:231] waiting for startup goroutines ...
	I0813 20:25:39.828691    4908 out.go:177] 
	I0813 20:25:39.828879    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:39.828955    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:25:39.830971    4908 out.go:177] * Starting node multinode-20210813202419-30853-m02 in cluster multinode-20210813202419-30853
	I0813 20:25:39.830993    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:39.831011    4908 cache.go:56] Caching tarball of preloaded images
	I0813 20:25:39.831144    4908 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:25:39.831164    4908 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:25:39.831245    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:25:39.831383    4908 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:25:39.831407    4908 start.go:313] acquiring machines lock for multinode-20210813202419-30853-m02: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:25:39.831479    4908 start.go:317] acquired machines lock for "multinode-20210813202419-30853-m02" in 57.116µs
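Machine creation is serialised under a cross-process lock whose spec is logged above (Delay:500ms Timeout:13m0s); here it is acquired in microseconds because nothing else holds it. minikube uses a dedicated mutex package for this, so the following Linux-only flock stand-in is illustrative only:

package sketch

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireLock takes an exclusive file lock, retrying every `delay` until
// `timeout`, mirroring the Delay/Timeout fields in the lock spec above.
func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // caller releases with LOCK_UN and Close
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}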
	I0813 20:25:39.831500    4908 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21
.3 ControlPlane:false Worker:true}
	I0813 20:25:39.831569    4908 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0813 20:25:39.833361    4908 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 20:25:39.833442    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:39.833475    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:39.843918    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35515
	I0813 20:25:39.844325    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:39.844771    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:39.844809    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:39.845137    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:39.845319    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:39.845475    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
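The "Launching plugin server … Plugin server listening at address 127.0.0.1:35515 … Calling .GetVersion" sequence is libmachine's out-of-process driver model: each driver binary runs as a child process serving RPC on a random localhost port, and the parent invokes driver methods over that connection. A stripped-down sketch of the server side (not libmachine's actual wire protocol; all names here are illustrative):

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

type NoArgs struct{}

// DriverServer stands in for a machine driver exposing methods such as
// GetVersion over net/rpc.
type DriverServer struct{}

func (d *DriverServer) GetVersion(_ NoArgs, reply *int) error {
	*reply = 1 // compare "Using API Version  1" above
	return nil
}

func main() {
	if err := rpc.Register(new(DriverServer)); err != nil {
		log.Fatal(err)
	}
	l, err := net.Listen("tcp", "127.0.0.1:0") // kernel picks a free port
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(l.Addr()) // the parent reads this address, e.g. 127.0.0.1:35515
	rpc.Accept(l)
}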
	I0813 20:25:39.845608    4908 start.go:160] libmachine.API.Create for "multinode-20210813202419-30853" (driver="kvm2")
	I0813 20:25:39.845639    4908 client.go:168] LocalClient.Create starting
	I0813 20:25:39.845673    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:25:39.845703    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:25:39.845724    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:25:39.845839    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:25:39.845859    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:25:39.845870    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:25:39.845910    4908 main.go:130] libmachine: Running pre-create checks...
	I0813 20:25:39.845919    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .PreCreateCheck
	I0813 20:25:39.846067    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetConfigRaw
	I0813 20:25:39.846445    4908 main.go:130] libmachine: Creating machine...
	I0813 20:25:39.846462    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .Create
	I0813 20:25:39.846581    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Creating KVM machine...
	I0813 20:25:39.849346    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found existing default KVM network
	I0813 20:25:39.849493    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found existing private KVM network mk-multinode-20210813202419-30853
	I0813 20:25:39.849607    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02 ...
	I0813 20:25:39.849636    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:25:39.849670    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:39.849568    5183 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:25:39.849755    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:25:40.025304    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.025182    5183 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa...
	I0813 20:25:40.264706    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.264555    5183 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/multinode-20210813202419-30853-m02.rawdisk...
	I0813 20:25:40.264750    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Writing magic tar header
	I0813 20:25:40.264800    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Writing SSH key tar header
	I0813 20:25:40.264821    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.264687    5183 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02 ...
	I0813 20:25:40.264842    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02
	I0813 20:25:40.264870    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:25:40.264895    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:25:40.264917    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02 (perms=drwx------)
	I0813 20:25:40.264945    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:25:40.264965    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:25:40.264986    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:25:40.265004    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:25:40.265024    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:25:40.265039    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:25:40.265052    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:25:40.265071    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:25:40.265085    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home
	I0813 20:25:40.265100    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Skipping /home - not owner
	I0813 20:25:40.265144    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Creating domain...
	I0813 20:25:40.289147    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:39:2c:5f in network default
	I0813 20:25:40.289612    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Ensuring networks are active...
	I0813 20:25:40.289635    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:40.291637    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Ensuring network default is active
	I0813 20:25:40.291940    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Ensuring network mk-multinode-20210813202419-30853 is active
	I0813 20:25:40.292296    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Getting domain xml...
	I0813 20:25:40.294048    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Creating domain...
	I0813 20:25:40.681999    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Waiting to get IP...
	I0813 20:25:40.682880    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:40.683407    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:40.683469    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.683397    5183 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:25:40.947670    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:40.948232    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:40.948258    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.948175    5183 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:25:41.330684    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:41.331209    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:41.331233    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:41.331176    5183 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:25:41.755680    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:41.756133    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:41.756165    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:41.756079    5183 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:25:42.230640    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:42.231214    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:42.231249    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:42.231164    5183 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:25:42.819789    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:42.820251    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:42.820281    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:42.820194    5183 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:25:43.656120    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:43.656648    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:43.656673    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:43.656584    5183 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:25:44.404315    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:44.404883    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:44.404913    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:44.404831    5183 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:25:45.393153    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:45.393595    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:45.393622    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:45.393561    5183 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:25:46.584718    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:46.585115    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:46.585146    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:46.585081    5183 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:25:48.264786    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:48.265427    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:48.265461    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:48.265364    5183 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:25:50.612895    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:50.613360    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:50.613397    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:50.613291    5183 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:25:53.983576    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:53.984023    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:53.984055    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:53.983947    5183 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 20:25:57.105314    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.105792    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Found IP for machine: 192.168.39.3
	I0813 20:25:57.105824    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.105835    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Reserving static IP address...
	I0813 20:25:57.106107    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210813202419-30853-m02", mac: "52:54:00:81:96:4b", ip: "192.168.39.3"} in network mk-multinode-20210813202419-30853
	I0813 20:25:57.152398    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Reserved static IP address: 192.168.39.3
	I0813 20:25:57.152442    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Getting to WaitForSSH function...
	I0813 20:25:57.152453    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Waiting for SSH to be available...
	I0813 20:25:57.157596    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.157925    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.157959    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.158116    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Using SSH client type: external
	I0813 20:25:57.158147    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa (-rw-------)
	I0813 20:25:57.158177    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:25:57.158194    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | About to run SSH command:
	I0813 20:25:57.158210    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | exit 0
	I0813 20:25:57.290436    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | SSH cmd err, output: <nil>: 
	I0813 20:25:57.291344    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) KVM machine creation complete!
	I0813 20:25:57.291404    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetConfigRaw
	I0813 20:25:57.291919    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:57.292092    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:57.292219    4908 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:25:57.292238    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetState
	I0813 20:25:57.295055    4908 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:25:57.295076    4908 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:25:57.295086    4908 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:25:57.295098    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.299585    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.299910    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.299936    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.300125    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.300286    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.300447    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.300599    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.300762    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.300929    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.300942    4908 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:25:57.422080    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:25:57.422111    4908 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:25:57.422123    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.427209    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.427540    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.427565    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.427692    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.427853    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.427980    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.428076    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.428181    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.428340    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.428354    4908 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:25:57.547277    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:25:57.547355    4908 main.go:130] libmachine: found compatible host: buildroot
	I0813 20:25:57.547370    4908 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:25:57.547384    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:57.547604    4908 buildroot.go:166] provisioning hostname "multinode-20210813202419-30853-m02"
	I0813 20:25:57.547637    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:57.547790    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.553068    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.553381    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.553418    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.553533    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.553721    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.553879    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.554012    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.554216    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.554393    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.554413    4908 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813202419-30853-m02 && echo "multinode-20210813202419-30853-m02" | sudo tee /etc/hostname
	I0813 20:25:57.683726    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813202419-30853-m02
	
	I0813 20:25:57.683753    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.688482    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.688770    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.688803    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.688904    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.689071    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.689236    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.689373    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.689514    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.689641    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.689662    4908 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813202419-30853-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813202419-30853-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813202419-30853-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:25:57.816877    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:25:57.816914    4908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:25:57.816929    4908 buildroot.go:174] setting up certificates
	I0813 20:25:57.816939    4908 provision.go:83] configureAuth start
	I0813 20:25:57.816948    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:57.817213    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:25:57.821850    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.822207    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.822237    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.822359    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.826610    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.826921    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.826951    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.827064    4908 provision.go:138] copyHostCerts
	I0813 20:25:57.827101    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:25:57.827137    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:25:57.827150    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:25:57.827218    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:25:57.827291    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:25:57.827317    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:25:57.827329    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:25:57.827358    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:25:57.827403    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:25:57.827426    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:25:57.827435    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:25:57.827462    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:25:57.827560    4908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813202419-30853-m02 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube multinode-20210813202419-30853-m02]
	I0813 20:25:58.099551    4908 provision.go:172] copyRemoteCerts
	I0813 20:25:58.099620    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:25:58.099652    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:58.104572    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.104921    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.104949    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.105064    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:58.105256    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.105413    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:58.105530    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:58.194743    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 20:25:58.194808    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:25:58.211458    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 20:25:58.211510    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:25:58.226802    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 20:25:58.226839    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 20:25:58.242520    4908 provision.go:86] duration metric: configureAuth took 425.571202ms
	I0813 20:25:58.242541    4908 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:25:58.242711    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:58.242820    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:58.248090    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.248396    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.248426    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.248532    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:58.248715    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.248848    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.248975    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:58.249096    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:58.249245    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:58.249263    4908 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:25:58.962979    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:25:58.963017    4908 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:25:58.963028    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetURL
	I0813 20:25:58.965664    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Using libvirt version 3000000
	I0813 20:25:58.969912    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.970233    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.970258    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.970366    4908 main.go:130] libmachine: Docker is up and running!
	I0813 20:25:58.970378    4908 main.go:130] libmachine: Reticulating splines...
	I0813 20:25:58.970388    4908 client.go:171] LocalClient.Create took 19.124740854s
	I0813 20:25:58.970410    4908 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813202419-30853" took 19.124802703s
	I0813 20:25:58.970423    4908 start.go:267] post-start starting for "multinode-20210813202419-30853-m02" (driver="kvm2")
	I0813 20:25:58.970430    4908 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:25:58.970454    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:58.970693    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:25:58.970721    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:58.974796    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.975134    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.975164    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.975303    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:58.975472    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.975612    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:58.975729    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:59.062393    4908 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:25:59.066339    4908 command_runner.go:124] > NAME=Buildroot
	I0813 20:25:59.066361    4908 command_runner.go:124] > VERSION=2020.02.12
	I0813 20:25:59.066367    4908 command_runner.go:124] > ID=buildroot
	I0813 20:25:59.066373    4908 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 20:25:59.066378    4908 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 20:25:59.066740    4908 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:25:59.066759    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:25:59.066809    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:25:59.066929    4908 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:25:59.066945    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /etc/ssl/certs/308532.pem
	I0813 20:25:59.067049    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:25:59.073037    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:25:59.089115    4908 start.go:270] post-start completed in 118.679631ms
	I0813 20:25:59.089164    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetConfigRaw
	I0813 20:25:59.089745    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:25:59.094392    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.094702    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.094735    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.094918    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:25:59.095077    4908 start.go:129] duration metric: createHost completed in 19.263499043s
	I0813 20:25:59.095089    4908 start.go:80] releasing machines lock for "multinode-20210813202419-30853-m02", held for 19.263600689s
	I0813 20:25:59.095126    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:59.095289    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:25:59.099345    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.099649    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.099684    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.101566    4908 out.go:177] * Found network options:
	I0813 20:25:59.102839    4908 out.go:177]   - NO_PROXY=192.168.39.64
	W0813 20:25:59.102884    4908 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 20:25:59.102919    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:59.103064    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:59.103527    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	W0813 20:25:59.103712    4908 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 20:25:59.103760    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:59.103831    4908 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:25:59.103876    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:59.103835    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:25:59.103934    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:59.108508    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.108856    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.108881    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.109047    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:59.109198    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:59.109322    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:59.109470    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:59.110249    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.112226    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.112259    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.112419    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:59.112582    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:59.112711    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:59.112839    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:59.213124    4908 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 20:25:59.213152    4908 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 20:25:59.213160    4908 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 20:25:59.213167    4908 command_runner.go:124] > The document has moved
	I0813 20:25:59.213176    4908 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 20:25:59.213182    4908 command_runner.go:124] > </BODY></HTML>
	I0813 20:26:03.210204    4908 command_runner.go:124] > {
	I0813 20:26:03.210227    4908 command_runner.go:124] >   "images": [
	I0813 20:26:03.210231    4908 command_runner.go:124] >   ]
	I0813 20:26:03.210235    4908 command_runner.go:124] > }
	I0813 20:26:03.211432    4908 command_runner.go:124] ! time="2021-08-13T20:25:59Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 20:26:03.211468    4908 command_runner.go:124] ! time="2021-08-13T20:26:01Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:26:03.211488    4908 command_runner.go:124] ! time="2021-08-13T20:26:03Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:26:03.211506    4908 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.107577771s)
	I0813 20:26:03.211535    4908 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 20:26:03.211569    4908 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.107709659s)
	I0813 20:26:03.211579    4908 ssh_runner.go:149] Run: which lz4
	I0813 20:26:03.216118    4908 command_runner.go:124] > /bin/lz4
	I0813 20:26:03.216190    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 20:26:03.216272    4908 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 20:26:03.220614    4908 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:26:03.221309    4908 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:26:03.221336    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 20:26:05.496670    4908 crio.go:362] Took 2.280427 seconds to copy over tarball
	I0813 20:26:05.496741    4908 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:26:10.991280    4908 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.494512409s)
	I0813 20:26:11.395068    4908 crio.go:369] Took 5.898357 seconds to extract the tarball
	I0813 20:26:11.395087    4908 ssh_runner.go:100] rm: /preloaded.tar.lz4
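
The preload flow above can be reproduced by hand: check whether a tarball is already staged on the node, copy the cached lz4 archive over, and unpack it into CRI-O's storage under /var. A minimal sketch, assuming the tarball name from this run and an SSH login to the node:

    # on the node: is a preload already staged?
    stat -c "%s %y" /preloaded.tar.lz4 || echo "no preload staged"
    # from the host: stage the cached tarball (filename taken from this run)
    scp preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.3:/preloaded.tar.lz4
    # on the node: unpack with lz4 decompression (requires the lz4 binary checked above), then clean up
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
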
	I0813 20:26:11.444796    4908 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:26:11.459921    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:26:11.470643    4908 docker.go:153] disabling docker service ...
	I0813 20:26:11.470696    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:26:11.482481    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:26:11.492174    4908 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 20:26:11.492242    4908 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:26:11.633999    4908 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 20:26:11.634091    4908 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:26:11.645422    4908 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 20:26:11.645910    4908 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 20:26:11.774927    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
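
Disabling docker here is ordinary systemctl work, written to tolerate a unit that may not exist at all (hence the "Unit docker.service not loaded" warning above being ignored). Roughly:

    sudo systemctl stop docker.socket docker.service || true  # ignore missing units
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service                        # symlinks the unit to /dev/null
    sudo systemctl is-active --quiet docker && echo "docker is still active"
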
	I0813 20:26:11.785306    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:26:11.797355    4908 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:26:11.797379    4908 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
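
The resulting /etc/crictl.yaml is just the two endpoint lines echoed back above; pinning both endpoints is what silences the deprecation warning seen earlier, where crictl probed dockershim and containerd before falling through to CRI-O. Writing it by hand is equivalent:

    sudo tee /etc/crictl.yaml <<'EOF'
    runtime-endpoint: unix:///var/run/crio/crio.sock
    image-endpoint: unix:///var/run/crio/crio.sock
    EOF
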
	I0813 20:26:11.797879    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:26:11.805228    4908 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:26:11.805252    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
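
Both crio.conf edits are one-line sed substitutions anchored to the start of the line; the same edits without the bash -c wrapper (single quotes around the sed program, so the TOML double quotes survive):

    sudo sed -i 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' /etc/crio/crio.conf
    sudo sed -i 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' /etc/crio/crio.conf
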
	I0813 20:26:11.813837    4908 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:26:11.820121    4908 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:26:11.820640    4908 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:26:11.820690    4908 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:26:11.834893    4908 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
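
Bridge netfilter may not be loaded yet at this point, so the sysctl probe is allowed to fail and the module is loaded explicitly before IPv4 forwarding is switched on:

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
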
	I0813 20:26:11.841588    4908 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:26:11.971587    4908 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:26:12.121767    4908 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:26:12.121837    4908 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:26:12.127823    4908 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 20:26:12.127849    4908 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 20:26:12.127859    4908 command_runner.go:124] > Device: 14h/20d	Inode: 30135       Links: 1
	I0813 20:26:12.127869    4908 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:26:12.127876    4908 command_runner.go:124] > Access: 2021-08-13 20:26:03.171125855 +0000
	I0813 20:26:12.127886    4908 command_runner.go:124] > Modify: 2021-08-13 20:25:58.879529405 +0000
	I0813 20:26:12.127895    4908 command_runner.go:124] > Change: 2021-08-13 20:25:58.879529405 +0000
	I0813 20:26:12.127902    4908 command_runner.go:124] >  Birth: -
	I0813 20:26:12.128097    4908 start.go:413] Will wait 60s for crictl version
	I0813 20:26:12.128150    4908 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:26:12.159356    4908 command_runner.go:124] > Version:  0.1.0
	I0813 20:26:12.159380    4908 command_runner.go:124] > RuntimeName:  cri-o
	I0813 20:26:12.159385    4908 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 20:26:12.159394    4908 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 20:26:12.159414    4908 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:26:12.159501    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:26:12.450092    4908 command_runner.go:124] ! time="2021-08-13T20:26:12Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:26:12.452148    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:26:12.452170    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:26:12.452178    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:26:12.452182    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:26:12.452191    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:26:12.452195    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:26:12.452199    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:26:12.452204    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:26:12.452269    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:26:12.732826    4908 command_runner.go:124] ! time="2021-08-13T20:26:12Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:26:12.734818    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:26:12.734835    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:26:12.734842    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:26:12.734847    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:26:12.734868    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:26:12.734877    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:26:12.734881    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:26:12.734886    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:26:14.345675    4908 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:26:14.627119    4908 out.go:177]   - env NO_PROXY=192.168.39.64
	I0813 20:26:14.627232    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:26:14.633413    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:26:14.633777    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:26:14.633815    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:26:14.633978    4908 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:26:14.639555    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
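
The /etc/hosts update is an atomic filter-and-append: drop any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back into place. The same idiom, spelled out with the IP and name from this run:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
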
	I0813 20:26:14.651171    4908 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853 for IP: 192.168.39.3
	I0813 20:26:14.651227    4908 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:26:14.651249    4908 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:26:14.651266    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:26:14.651285    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:26:14.651300    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:26:14.651319    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:26:14.651394    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:26:14.651442    4908 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:26:14.651461    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:26:14.651499    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:26:14.651535    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:26:14.651577    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:26:14.651640    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:26:14.651679    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.651699    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem -> /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.651715    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.652111    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:26:14.672172    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:26:14.691853    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:26:14.709832    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:26:14.726756    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:26:14.742902    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:26:14.759718    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:26:14.776474    4908 ssh_runner.go:149] Run: openssl version
	I0813 20:26:14.782381    4908 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 20:26:14.782440    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:26:14.789733    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.794316    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.794486    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.794531    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.800232    4908 command_runner.go:124] > b5213941
	I0813 20:26:14.800292    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:26:14.808971    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:26:14.817238    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.821902    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.821929    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.821963    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.827540    4908 command_runner.go:124] > 51391683
	I0813 20:26:14.827793    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:26:14.835963    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:26:14.844187    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.848938    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.848963    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.848995    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.854773    4908 command_runner.go:124] > 3ec20f2e
	I0813 20:26:14.855261    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
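
Each extra CA follows the OpenSSL hashed-directory convention: the PEM is linked into /etc/ssl/certs under its own name, its subject hash is computed, and a <hash>.0 symlink makes it discoverable to verification code. A sketch for the minikube CA from this run:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0
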
	I0813 20:26:14.863403    4908 ssh_runner.go:149] Run: crio config
	I0813 20:26:15.120564    4908 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 20:26:15.120604    4908 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 20:26:15.120614    4908 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 20:26:15.120619    4908 command_runner.go:124] > #
	I0813 20:26:15.120630    4908 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 20:26:15.120652    4908 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 20:26:15.120679    4908 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 20:26:15.120695    4908 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 20:26:15.120703    4908 command_runner.go:124] > # reload'.
	I0813 20:26:15.120714    4908 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 20:26:15.120727    4908 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 20:26:15.120740    4908 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 20:26:15.120751    4908 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 20:26:15.120757    4908 command_runner.go:124] > [crio]
	I0813 20:26:15.120768    4908 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 20:26:15.120778    4908 command_runner.go:124] > # containers images, in this directory.
	I0813 20:26:15.120787    4908 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 20:26:15.120802    4908 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 20:26:15.120813    4908 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 20:26:15.120825    4908 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 20:26:15.120836    4908 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 20:26:15.120844    4908 command_runner.go:124] > #storage_driver = "overlay"
	I0813 20:26:15.120854    4908 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0813 20:26:15.120866    4908 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 20:26:15.120873    4908 command_runner.go:124] > #storage_option = [
	I0813 20:26:15.120878    4908 command_runner.go:124] > #]
	I0813 20:26:15.120889    4908 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 20:26:15.120901    4908 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 20:26:15.120910    4908 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 20:26:15.120923    4908 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 20:26:15.120936    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 20:26:15.120944    4908 command_runner.go:124] > # always happen on a node reboot
	I0813 20:26:15.120980    4908 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 20:26:15.120992    4908 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 20:26:15.121002    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 20:26:15.121013    4908 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 20:26:15.121026    4908 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 20:26:15.121040    4908 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 20:26:15.121049    4908 command_runner.go:124] > [crio.api]
	I0813 20:26:15.121058    4908 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 20:26:15.121066    4908 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 20:26:15.121076    4908 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 20:26:15.121083    4908 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 20:26:15.121095    4908 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 20:26:15.121106    4908 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 20:26:15.121113    4908 command_runner.go:124] > stream_port = "0"
	I0813 20:26:15.121122    4908 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 20:26:15.121131    4908 command_runner.go:124] > stream_enable_tls = false
	I0813 20:26:15.121141    4908 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 20:26:15.121150    4908 command_runner.go:124] > stream_idle_timeout = ""
	I0813 20:26:15.121161    4908 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 20:26:15.121174    4908 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 20:26:15.121180    4908 command_runner.go:124] > # minutes.
	I0813 20:26:15.121185    4908 command_runner.go:124] > stream_tls_cert = ""
	I0813 20:26:15.121194    4908 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 20:26:15.121205    4908 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 20:26:15.121211    4908 command_runner.go:124] > stream_tls_key = ""
	I0813 20:26:15.121222    4908 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 20:26:15.121235    4908 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 20:26:15.121244    4908 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 20:26:15.121250    4908 command_runner.go:124] > stream_tls_ca = ""
	I0813 20:26:15.121269    4908 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:26:15.121276    4908 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 20:26:15.121287    4908 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:26:15.121294    4908 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 20:26:15.121305    4908 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 20:26:15.121319    4908 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 20:26:15.121325    4908 command_runner.go:124] > [crio.runtime]
	I0813 20:26:15.121335    4908 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 20:26:15.121344    4908 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 20:26:15.121353    4908 command_runner.go:124] > # "nofile=1024:2048"
	I0813 20:26:15.121363    4908 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 20:26:15.121370    4908 command_runner.go:124] > #default_ulimits = [
	I0813 20:26:15.121376    4908 command_runner.go:124] > #]
	I0813 20:26:15.121386    4908 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 20:26:15.121393    4908 command_runner.go:124] > no_pivot = false
	I0813 20:26:15.121403    4908 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 20:26:15.121441    4908 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 20:26:15.121452    4908 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 20:26:15.121462    4908 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 20:26:15.121473    4908 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 20:26:15.121480    4908 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 20:26:15.121488    4908 command_runner.go:124] > # Cgroup setting for conmon
	I0813 20:26:15.121495    4908 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 20:26:15.121504    4908 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 20:26:15.121512    4908 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 20:26:15.121523    4908 command_runner.go:124] > conmon_env = [
	I0813 20:26:15.121529    4908 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 20:26:15.121533    4908 command_runner.go:124] > ]
	I0813 20:26:15.121538    4908 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 20:26:15.121545    4908 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 20:26:15.121551    4908 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 20:26:15.121555    4908 command_runner.go:124] > default_env = [
	I0813 20:26:15.121558    4908 command_runner.go:124] > ]
	I0813 20:26:15.121564    4908 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 20:26:15.121569    4908 command_runner.go:124] > selinux = false
	I0813 20:26:15.121577    4908 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 20:26:15.121585    4908 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 20:26:15.121591    4908 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 20:26:15.121595    4908 command_runner.go:124] > seccomp_profile = ""
	I0813 20:26:15.121600    4908 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 20:26:15.121608    4908 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 20:26:15.121614    4908 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 20:26:15.121619    4908 command_runner.go:124] > # which might increase security.
	I0813 20:26:15.121624    4908 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 20:26:15.121631    4908 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 20:26:15.121638    4908 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 20:26:15.121645    4908 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 20:26:15.121651    4908 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 20:26:15.121657    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.121661    4908 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 20:26:15.121668    4908 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 20:26:15.121673    4908 command_runner.go:124] > # irqbalance daemon.
	I0813 20:26:15.121678    4908 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 20:26:15.121684    4908 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 20:26:15.121690    4908 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 20:26:15.121696    4908 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 20:26:15.121701    4908 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 20:26:15.121709    4908 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 20:26:15.121716    4908 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 20:26:15.121719    4908 command_runner.go:124] > # will be added.
	I0813 20:26:15.121724    4908 command_runner.go:124] > default_capabilities = [
	I0813 20:26:15.121729    4908 command_runner.go:124] > 	"CHOWN",
	I0813 20:26:15.121732    4908 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 20:26:15.121736    4908 command_runner.go:124] > 	"FSETID",
	I0813 20:26:15.121739    4908 command_runner.go:124] > 	"FOWNER",
	I0813 20:26:15.121743    4908 command_runner.go:124] > 	"SETGID",
	I0813 20:26:15.121746    4908 command_runner.go:124] > 	"SETUID",
	I0813 20:26:15.121752    4908 command_runner.go:124] > 	"SETPCAP",
	I0813 20:26:15.121757    4908 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 20:26:15.121760    4908 command_runner.go:124] > 	"KILL",
	I0813 20:26:15.121763    4908 command_runner.go:124] > ]
	I0813 20:26:15.121769    4908 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 20:26:15.121776    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:26:15.121781    4908 command_runner.go:124] > default_sysctls = [
	I0813 20:26:15.121784    4908 command_runner.go:124] > ]
	I0813 20:26:15.121790    4908 command_runner.go:124] > # List of additional devices, specified as
	I0813 20:26:15.121798    4908 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 20:26:15.121804    4908 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 20:26:15.121809    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:26:15.121814    4908 command_runner.go:124] > additional_devices = [
	I0813 20:26:15.121817    4908 command_runner.go:124] > ]
	I0813 20:26:15.121823    4908 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 20:26:15.121830    4908 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 20:26:15.121834    4908 command_runner.go:124] > hooks_dir = [
	I0813 20:26:15.121839    4908 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 20:26:15.121842    4908 command_runner.go:124] > ]
	I0813 20:26:15.121848    4908 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 20:26:15.121855    4908 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 20:26:15.121860    4908 command_runner.go:124] > # its default mounts from the following two files:
	I0813 20:26:15.121865    4908 command_runner.go:124] > #
	I0813 20:26:15.121871    4908 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 20:26:15.121878    4908 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 20:26:15.121884    4908 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 20:26:15.121888    4908 command_runner.go:124] > #
	I0813 20:26:15.121894    4908 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 20:26:15.121901    4908 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 20:26:15.121908    4908 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 20:26:15.121914    4908 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 20:26:15.121918    4908 command_runner.go:124] > #
	I0813 20:26:15.121922    4908 command_runner.go:124] > #default_mounts_file = ""
	I0813 20:26:15.121927    4908 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 20:26:15.121933    4908 command_runner.go:124] > pids_limit = 1024
	I0813 20:26:15.121939    4908 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 20:26:15.121945    4908 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 20:26:15.121952    4908 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 20:26:15.121974    4908 command_runner.go:124] > # limit is never exceeded.
	I0813 20:26:15.121984    4908 command_runner.go:124] > log_size_max = -1
	I0813 20:26:15.122064    4908 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 20:26:15.122074    4908 command_runner.go:124] > log_to_journald = false
	I0813 20:26:15.122081    4908 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 20:26:15.122085    4908 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 20:26:15.122091    4908 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 20:26:15.122099    4908 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 20:26:15.122104    4908 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 20:26:15.122108    4908 command_runner.go:124] > bind_mount_prefix = ""
	I0813 20:26:15.122114    4908 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 20:26:15.122119    4908 command_runner.go:124] > read_only = false
	I0813 20:26:15.122125    4908 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 20:26:15.122132    4908 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 20:26:15.122137    4908 command_runner.go:124] > # live configuration reload.
	I0813 20:26:15.122141    4908 command_runner.go:124] > log_level = "info"
	I0813 20:26:15.122147    4908 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 20:26:15.122153    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.122157    4908 command_runner.go:124] > log_filter = ""
	I0813 20:26:15.122165    4908 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 20:26:15.122172    4908 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 20:26:15.122176    4908 command_runner.go:124] > # separated by comma.
	I0813 20:26:15.122182    4908 command_runner.go:124] > uid_mappings = ""
	I0813 20:26:15.122188    4908 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 20:26:15.122194    4908 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 20:26:15.122198    4908 command_runner.go:124] > # separated by comma.
	I0813 20:26:15.122202    4908 command_runner.go:124] > gid_mappings = ""
	I0813 20:26:15.122208    4908 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 20:26:15.122216    4908 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 20:26:15.122221    4908 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 20:26:15.122229    4908 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 20:26:15.122238    4908 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 20:26:15.122247    4908 command_runner.go:124] > # and manage their lifecycle.
	I0813 20:26:15.122257    4908 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 20:26:15.122266    4908 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 20:26:15.122272    4908 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 20:26:15.122279    4908 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 20:26:15.122284    4908 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 20:26:15.122290    4908 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 20:26:15.122294    4908 command_runner.go:124] > drop_infra_ctr = false
	I0813 20:26:15.122301    4908 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 20:26:15.122307    4908 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 20:26:15.122315    4908 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 20:26:15.122322    4908 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 20:26:15.122328    4908 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 20:26:15.122333    4908 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 20:26:15.122339    4908 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 20:26:15.122346    4908 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 20:26:15.122351    4908 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 20:26:15.122357    4908 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 20:26:15.122364    4908 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 20:26:15.122371    4908 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 20:26:15.122375    4908 command_runner.go:124] > default_runtime = "runc"
	I0813 20:26:15.122381    4908 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 20:26:15.122389    4908 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 20:26:15.122396    4908 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 20:26:15.122403    4908 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 20:26:15.122406    4908 command_runner.go:124] > #
	I0813 20:26:15.122411    4908 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 20:26:15.122418    4908 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 20:26:15.122421    4908 command_runner.go:124] > #  runtime_type = "oci"
	I0813 20:26:15.122426    4908 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 20:26:15.122431    4908 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 20:26:15.122435    4908 command_runner.go:124] > #  allowed_annotations = []
	I0813 20:26:15.122438    4908 command_runner.go:124] > # Where:
	I0813 20:26:15.122444    4908 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 20:26:15.122452    4908 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 20:26:15.122458    4908 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 20:26:15.122466    4908 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 20:26:15.122470    4908 command_runner.go:124] > #   in $PATH.
	I0813 20:26:15.122476    4908 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 20:26:15.122482    4908 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 20:26:15.122488    4908 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 20:26:15.122492    4908 command_runner.go:124] > #   state.
	I0813 20:26:15.122498    4908 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 20:26:15.122504    4908 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 20:26:15.122511    4908 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 20:26:15.122545    4908 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 20:26:15.122556    4908 command_runner.go:124] > #   The currently recognized values are:
	I0813 20:26:15.122573    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 20:26:15.122582    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 20:26:15.122589    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 20:26:15.122594    4908 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 20:26:15.122599    4908 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 20:26:15.122603    4908 command_runner.go:124] > runtime_type = "oci"
	I0813 20:26:15.122607    4908 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 20:26:15.122614    4908 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 20:26:15.122619    4908 command_runner.go:124] > # running containers
	I0813 20:26:15.122623    4908 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 20:26:15.122630    4908 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 20:26:15.122637    4908 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 20:26:15.122643    4908 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0813 20:26:15.122650    4908 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 20:26:15.122654    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 20:26:15.122659    4908 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 20:26:15.122663    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 20:26:15.122668    4908 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 20:26:15.122674    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 20:26:15.122681    4908 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 20:26:15.122684    4908 command_runner.go:124] > #
	I0813 20:26:15.122690    4908 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 20:26:15.122697    4908 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 20:26:15.122703    4908 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 20:26:15.122711    4908 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 20:26:15.122717    4908 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 20:26:15.122721    4908 command_runner.go:124] > [crio.image]
	I0813 20:26:15.122727    4908 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 20:26:15.122732    4908 command_runner.go:124] > default_transport = "docker://"
	I0813 20:26:15.122738    4908 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 20:26:15.122745    4908 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:26:15.122749    4908 command_runner.go:124] > global_auth_file = ""
	I0813 20:26:15.122754    4908 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 20:26:15.122760    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.122765    4908 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 20:26:15.122771    4908 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 20:26:15.122778    4908 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:26:15.122783    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.122788    4908 command_runner.go:124] > pause_image_auth_file = ""
	I0813 20:26:15.122801    4908 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 20:26:15.122810    4908 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0813 20:26:15.122818    4908 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0813 20:26:15.122825    4908 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 20:26:15.122829    4908 command_runner.go:124] > pause_command = "/pause"
	I0813 20:26:15.122836    4908 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 20:26:15.122843    4908 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 20:26:15.122862    4908 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 20:26:15.122872    4908 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 20:26:15.122877    4908 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 20:26:15.122881    4908 command_runner.go:124] > signature_policy = ""
	I0813 20:26:15.122888    4908 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 20:26:15.122899    4908 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 20:26:15.122903    4908 command_runner.go:124] > # changing them here.
	I0813 20:26:15.122907    4908 command_runner.go:124] > #insecure_registries = "[]"
	I0813 20:26:15.122914    4908 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 20:26:15.122920    4908 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 20:26:15.122924    4908 command_runner.go:124] > image_volumes = "mkdir"
	I0813 20:26:15.122930    4908 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 20:26:15.122937    4908 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 20:26:15.122943    4908 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0813 20:26:15.122950    4908 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 20:26:15.122954    4908 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 20:26:15.122958    4908 command_runner.go:124] > #registries = [
	I0813 20:26:15.122962    4908 command_runner.go:124] > # 	"docker.io",
	I0813 20:26:15.122965    4908 command_runner.go:124] > #]
	I0813 20:26:15.122970    4908 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 20:26:15.122975    4908 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 20:26:15.122981    4908 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 20:26:15.122985    4908 command_runner.go:124] > # CNI plugins.
	I0813 20:26:15.122991    4908 command_runner.go:124] > [crio.network]
	I0813 20:26:15.122997    4908 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 20:26:15.123002    4908 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 20:26:15.123007    4908 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 20:26:15.123012    4908 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 20:26:15.123018    4908 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 20:26:15.123024    4908 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 20:26:15.123044    4908 command_runner.go:124] > plugin_dirs = [
	I0813 20:26:15.123054    4908 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 20:26:15.123067    4908 command_runner.go:124] > ]
	I0813 20:26:15.123077    4908 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 20:26:15.123081    4908 command_runner.go:124] > [crio.metrics]
	I0813 20:26:15.123088    4908 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 20:26:15.123092    4908 command_runner.go:124] > enable_metrics = true
	I0813 20:26:15.123099    4908 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 20:26:15.123106    4908 command_runner.go:124] > metrics_port = 9090
	I0813 20:26:15.123150    4908 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 20:26:15.123161    4908 command_runner.go:124] > metrics_socket = ""
	I0813 20:26:15.123227    4908 command_runner.go:124] ! time="2021-08-13T20:26:15Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:26:15.123252    4908 command_runner.go:124] ! time="2021-08-13T20:26:15Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 20:26:15.123265    4908 command_runner.go:124] ! time="2021-08-13T20:26:15Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 20:26:15.123291    4908 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
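
As the header of the dumped configuration says, options marked "supports live configuration reload" can be changed without restarting CRI-O by sending SIGHUP to the daemon. A hedged example using log_level, one of the reloadable options listed above:

    sudo sed -i 's|^log_level = .*$|log_level = "debug"|' /etc/crio/crio.conf
    sudo pkill -HUP -x crio   # or "systemctl reload crio" where the unit defines ExecReload
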
	I0813 20:26:15.123417    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:26:15.123433    4908 cni.go:154] 2 nodes found, recommending kindnet
	I0813 20:26:15.123443    4908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:26:15.123461    4908 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813202419-30853 NodeName:multinode-20210813202419-30853-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:26:15.123626    4908 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813202419-30853-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
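One note on the eviction thresholds just above: the intended values are literal percent strings such as "0%". When a dump like this passes through a printf-style formatter as the format string rather than as an argument, Go renders the bare '%' as %!"(MISSING), which is why mangled copies of this config sometimes appear in logs. A minimal Go sketch of the pitfall (illustrative only, not minikube's own code):

	package main

	import "fmt"

	func main() {
		line := `nodefs.available: "0%"`

		// The text used as a format string: fmt treats the bare '%' as the
		// start of a verb and prints: nodefs.available: "0%!"(MISSING)
		fmt.Println(fmt.Sprintf(line))

		// The text passed as an argument survives intact: nodefs.available: "0%"
		fmt.Printf("%s\n", line)
	}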
	
	I0813 20:26:15.123708    4908 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813202419-30853-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:26:15.123770    4908 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:26:15.130951    4908 command_runner.go:124] > kubeadm
	I0813 20:26:15.130972    4908 command_runner.go:124] > kubectl
	I0813 20:26:15.130977    4908 command_runner.go:124] > kubelet
	I0813 20:26:15.130992    4908 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:26:15.131033    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0813 20:26:15.138298    4908 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0813 20:26:15.151957    4908 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:26:15.163904    4908 ssh_runner.go:149] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0813 20:26:15.168042    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
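The /etc/hosts rewrite above is idempotent: it strips any stale control-plane.minikube.internal line, appends the current address, and copies the temp file back into place. A rough Go equivalent of that update, sketched under the same paths and entry as the command above (needs root, like the original):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.64\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Drop any stale line for the control-plane alias, then append the
		// current one, mirroring the grep -v / echo pipeline in the log.
		var kept []string
		for _, l := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(l, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, l)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}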
	I0813 20:26:15.178271    4908 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:26:15.178533    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:26:15.178717    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:26:15.178763    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:26:15.189761    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0813 20:26:15.190179    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:26:15.190660    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:26:15.190681    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:26:15.191029    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:26:15.191227    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:26:15.191376    4908 start.go:241] JoinCluster: &{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:26:15.191460    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0813 20:26:15.191480    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:26:15.197254    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:26:15.197612    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:26:15.197640    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:26:15.197794    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:26:15.197950    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:26:15.198078    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:26:15.198178    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:26:15.410711    4908 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token vnwbzt.8w4iune7lflm0jcs --discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b 
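For reference, the --discovery-token-ca-cert-hash printed above is kubeadm's public key pin: a SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch that recomputes it from the CA file referenced in the kubeadm options above (run on the control-plane VM; illustrative only, not the test's code):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Cluster CA path taken from the ClientCAFile field in the log above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins sha256 over the DER-encoded SubjectPublicKeyInfo.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}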
	I0813 20:26:15.410967    4908 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:15.411008    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token vnwbzt.8w4iune7lflm0jcs --discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813202419-30853-m02"
	I0813 20:26:15.573144    4908 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 20:26:15.892346    4908 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0813 20:26:15.892373    4908 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0813 20:26:15.937758    4908 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 20:26:15.938171    4908 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 20:26:15.938206    4908 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 20:26:16.111727    4908 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0813 20:26:22.209577    4908 command_runner.go:124] > This node has joined the cluster:
	I0813 20:26:22.209610    4908 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0813 20:26:22.209620    4908 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0813 20:26:22.209631    4908 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0813 20:26:22.211465    4908 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 20:26:22.211595    4908 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token vnwbzt.8w4iune7lflm0jcs --discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813202419-30853-m02": (6.800564814s)
	I0813 20:26:22.211633    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0813 20:26:22.524241    4908 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0813 20:26:22.524278    4908 start.go:243] JoinCluster complete in 7.332902101s
	I0813 20:26:22.524289    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:26:22.524294    4908 cni.go:154] 2 nodes found, recommending kindnet
	I0813 20:26:22.524350    4908 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:26:22.530265    4908 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 20:26:22.530295    4908 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 20:26:22.530304    4908 command_runner.go:124] > Device: 10h/16d	Inode: 22875       Links: 1
	I0813 20:26:22.530314    4908 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:26:22.530322    4908 command_runner.go:124] > Access: 2021-08-13 20:24:33.325091489 +0000
	I0813 20:26:22.530335    4908 command_runner.go:124] > Modify: 2021-08-10 20:02:08.000000000 +0000
	I0813 20:26:22.530347    4908 command_runner.go:124] > Change: 2021-08-13 20:24:29.381091489 +0000
	I0813 20:26:22.530358    4908 command_runner.go:124] >  Birth: -
	I0813 20:26:22.530415    4908 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:26:22.530428    4908 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:26:22.544043    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:26:22.874980    4908 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0813 20:26:22.878379    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0813 20:26:22.882142    4908 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0813 20:26:22.896367    4908 command_runner.go:124] > daemonset.apps/kindnet configured
	I0813 20:26:22.899822    4908 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:22.901777    4908 out.go:177] * Verifying Kubernetes components...
	I0813 20:26:22.901873    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:22.912851    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:26:22.913078    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:26:22.914333    4908 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813202419-30853-m02" to be "Ready" ...
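The GET loop that follows is this wait implemented over raw round trippers: the node object is fetched roughly every 500ms until its Ready condition reports True. The same check sketched with client-go (kubeconfig path and node name taken from the surrounding lines; illustrative only, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig written by this run (see the loader.go line above).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms, up to the same 6m0s budget, until Ready is True.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-20210813202419-30853-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("node ready:", err == nil)
	}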
	I0813 20:26:22.914424    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:22.914440    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:22.914447    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:22.914457    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:22.919749    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:22.919763    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:22.919767    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:22.919771    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:22.919775    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:22.919780    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:22 GMT
	I0813 20:26:22.919785    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:22.920063    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"570","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5415 chars]
	I0813 20:26:23.421063    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:23.421087    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:23.421092    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:23.421097    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:23.424204    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:23.424227    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:23.424234    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:23.424240    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:23.424245    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:23.424249    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:23 GMT
	I0813 20:26:23.424253    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:23.424836    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"570","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5415 chars]
	I0813 20:26:23.921471    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:23.921499    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:23.921508    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:23.921514    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:23.925796    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:23.925817    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:23.925824    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:23.925829    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:23.925833    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:23.925838    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:23.925843    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:23 GMT
	I0813 20:26:23.926144    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"570","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5415 chars]
	I0813 20:26:24.421362    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:24.421386    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:24.421392    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:24.421397    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:24.424093    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:24.424117    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:24.424125    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:24.424130    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:24 GMT
	I0813 20:26:24.424135    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:24.424140    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:24.424145    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:24.424475    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:24.920695    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:24.920732    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:24.920739    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:24.920745    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:24.926187    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:24.926220    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:24.926228    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:24.926233    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:24.926242    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:24.926247    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:24.926252    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:24 GMT
	I0813 20:26:24.926862    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:24.927137    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:25.421491    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:25.421518    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:25.421526    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:25.421532    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:25.425220    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:25.425244    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:25.425250    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:25 GMT
	I0813 20:26:25.425258    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:25.425263    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:25.425268    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:25.425272    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:25.426053    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:25.920691    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:25.920717    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:25.920723    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:25.920728    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:25.923957    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:25.923979    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:25.923984    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:25.923993    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:25.923998    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:25.924005    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:25.924013    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:25 GMT
	I0813 20:26:25.924114    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:26.420612    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:26.420637    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:26.420642    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:26.420646    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:26.424486    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:26.424507    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:26.424512    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:26.424515    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:26.424518    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:26 GMT
	I0813 20:26:26.424521    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:26.424524    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:26.424831    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:26.920469    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:26.920501    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:26.920507    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:26.920511    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:26.923901    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:26.923922    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:26.923929    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:26.923933    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:26.923937    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:26 GMT
	I0813 20:26:26.923942    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:26.923946    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:26.924495    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:27.421253    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:27.421284    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:27.421292    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:27.421298    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:27.425321    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:27.425343    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:27.425349    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:27.425352    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:27.425358    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:27.425362    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:27 GMT
	I0813 20:26:27.425365    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:27.426034    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:27.426321    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:27.920529    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:27.920552    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:27.920558    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:27.920562    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:27.924134    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:27.924155    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:27.924160    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:27.924164    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:27.924167    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:27.924170    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:27.924173    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:27 GMT
	I0813 20:26:27.924259    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:28.420575    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:28.420599    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:28.420605    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:28.420610    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:28.424172    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:28.424195    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:28.424202    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:28.424207    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:28 GMT
	I0813 20:26:28.424211    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:28.424215    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:28.424220    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:28.424621    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:28.921346    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:28.921372    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:28.921378    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:28.921382    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:28.927004    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:28.927025    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:28.927030    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:28.927034    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:28.927038    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:28.927042    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:28.927052    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:28 GMT
	I0813 20:26:28.927765    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:29.421216    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:29.421239    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:29.421245    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:29.421249    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:29.425370    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:29.425391    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:29.425397    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:29 GMT
	I0813 20:26:29.425401    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:29.425406    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:29.425410    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:29.425414    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:29.425976    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:29.920964    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:29.920987    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:29.920993    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:29.920997    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:29.924148    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:29.924171    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:29.924178    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:29.924190    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:29.924194    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:29.924199    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:29.924203    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:29 GMT
	I0813 20:26:29.924407    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:29.924739    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:30.420436    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:30.420459    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:30.420471    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:30.420475    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:30.423266    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:30.423283    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:30.423288    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:30.423291    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:30.423294    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:30.423297    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:30.423300    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:30 GMT
	I0813 20:26:30.423978    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:30.920633    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:30.920659    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:30.920665    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:30.920669    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:30.922917    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:30.922938    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:30.922944    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:30.922949    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:30.922953    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:30.922961    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:30.922965    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:30 GMT
	I0813 20:26:30.923119    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:31.420757    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:31.420787    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:31.420795    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:31.420801    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:31.425211    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:31.425231    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:31.425237    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:31.425242    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:31.425247    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:31.425251    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:31.425255    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:31 GMT
	I0813 20:26:31.425526    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:31.921229    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:31.921256    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:31.921262    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:31.921266    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:31.926039    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:31.926062    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:31.926068    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:31 GMT
	I0813 20:26:31.926073    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:31.926076    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:31.926081    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:31.926085    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:31.926692    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:31.926956    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:32.421422    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:32.421452    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.421458    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.421463    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.425000    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.425023    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.425030    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.425035    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.425039    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.425046    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.425050    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.425149    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"596","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5717 chars]
	I0813 20:26:32.425447    4908 node_ready.go:49] node "multinode-20210813202419-30853-m02" has status "Ready":"True"
	I0813 20:26:32.425469    4908 node_ready.go:38] duration metric: took 9.511109901s waiting for node "multinode-20210813202419-30853-m02" to be "Ready" ...
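
The ~500ms cadence of the GET .../nodes/multinode-20210813202419-30853-m02 polls above is a readiness wait: fetch the Node, inspect its Ready condition, retry until it is True or the timeout elapses. A minimal client-go sketch of such a loop, assuming a kubeconfig at the default path; the helper name, interval, and error handling are illustrative, not minikube's actual node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports the
    // NodeReady condition as True, or the timeout elapses.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitNodeReady(cs, "multinode-20210813202419-30853-m02", 6*time.Minute)
        fmt.Println("node ready:", err == nil)
    }
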
	I0813 20:26:32.425520    4908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:26:32.425603    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:32.425615    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.425622    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.425628    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.431919    4908 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 20:26:32.431940    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.431946    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.431951    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.431955    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.431960    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.431972    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.433683    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 66665 chars]
	I0813 20:26:32.435385    4908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.435459    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:26:32.435468    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.435473    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.435477    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.437998    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:32.438011    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.438015    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.438020    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.438024    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.438029    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.438033    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.438439    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5733 chars]
	I0813 20:26:32.438820    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.438837    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.438844    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.438864    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.440694    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:32.440711    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.440716    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.440721    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.440725    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.440729    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.440738    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.441051    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.441321    4908 pod_ready.go:92] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.441334    4908 pod_ready.go:81] duration metric: took 5.922196ms waiting for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
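
Pods are gated the same way, but on the PodReady condition rather than NodeReady. A minimal sketch of the check the pod_ready.go lines record, assuming a Clientset cs and the same imports as the earlier sketch; the helper names are illustrative:

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // checkCoreDNS fetches the pod seen in the log and tests its condition.
    func checkCoreDNS(cs kubernetes.Interface) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-58k2l", metav1.GetOptions{})
        if err == nil && isPodReady(pod) {
            fmt.Println("coredns pod is Ready")
        }
    }
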
	I0813 20:26:32.441342    4908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.441387    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813202419-30853
	I0813 20:26:32.441395    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.441399    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.441403    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.443185    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:32.443197    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.443201    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.443205    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.443208    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.443211    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.443214    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.443414    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813202419-30853","namespace":"kube-system","uid":"0e8c51de-4800-4c2d-af81-4f4f197d3cd5","resourceVersion":"491","creationTimestamp":"2021-08-13T20:25:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.64:2379","kubernetes.io/config.hash":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.mirror":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.seen":"2021-08-13T20:25:00.776305134Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0813 20:26:32.443652    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.443663    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.443668    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.443672    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.445525    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:32.445541    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.445547    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.445551    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.445555    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.445560    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.445564    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.445981    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.446268    4908 pod_ready.go:92] pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.446284    4908 pod_ready.go:81] duration metric: took 4.935192ms waiting for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.446300    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.446359    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813202419-30853
	I0813 20:26:32.446370    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.446376    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.446385    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.450385    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.450403    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.450409    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.450414    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.450418    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.450422    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.450426    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.451077    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813202419-30853","namespace":"kube-system","uid":"53b6207c-cf99-4cb1-b237-0e69df65538b","resourceVersion":"478","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.64:8443","kubernetes.io/config.hash":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.mirror":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.seen":"2021-08-13T20:25:00.776307664Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0813 20:26:32.451437    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.451456    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.451463    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.451469    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.455111    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.455124    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.455128    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.455132    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.455135    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.455138    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.455141    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.455868    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.456073    4908 pod_ready.go:92] pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.456084    4908 pod_ready.go:81] duration metric: took 9.770345ms waiting for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.456094    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.456153    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813202419-30853
	I0813 20:26:32.456164    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.456170    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.456176    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.459155    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:32.459170    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.459176    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.459181    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.459185    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.459190    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.459194    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.459594    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813202419-30853","namespace":"kube-system","uid":"f1752bba-a132-4093-8ff3-ad48483d468b","resourceVersion":"475","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.mirror":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.seen":"2021-08-13T20:25:00.776309845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0813 20:26:32.459934    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.459952    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.459957    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.459963    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.463301    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.463316    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.463322    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.463326    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.463330    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.463334    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.463339    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.464040    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.464345    4908 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.464364    4908 pod_ready.go:81] duration metric: took 8.25283ms waiting for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.464375    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8vgbg" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.621762    4908 request.go:600] Waited for 157.330254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8vgbg
	I0813 20:26:32.621827    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8vgbg
	I0813 20:26:32.621833    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.621839    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.621843    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.629534    4908 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 20:26:32.629560    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.629568    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.629574    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.629579    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.629584    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.629590    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.630528    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8vgbg","generateName":"kube-proxy-","namespace":"kube-system","uid":"c0eacea5-4ed3-4d69-bb88-ffb1496d2245","resourceVersion":"582","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb4b18d1-5cff-490a-b573-900487c4d9e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb4b18d1-5cff-490a-b573-900487c4d9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5762 chars]
	I0813 20:26:32.822400    4908 request.go:600] Waited for 191.401336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:32.822476    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:32.822484    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.822492    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.822499    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.827976    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:32.827998    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.828004    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.828009    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.828014    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.828018    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.828023    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.828529    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"596","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5717 chars]
	I0813 20:26:32.828861    4908 pod_ready.go:92] pod "kube-proxy-8vgbg" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.828879    4908 pod_ready.go:81] duration metric: took 364.496122ms waiting for pod "kube-proxy-8vgbg" in "kube-system" namespace to be "Ready" ...
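
The "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's own rate limiter at work, not the API server: rest.Config defaults to QPS 5 with burst 10, so this burst of status polls gets spaced out on the client before server-side priority and fairness ever sees it. A minimal sketch of raising those limits, with illustrative values:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 requests/second once the burst is spent
        cfg.Burst = 100 // default is 10; controls the initial burst allowance
        _ = kubernetes.NewForConfigOrDie(cfg)
    }
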
	I0813 20:26:32.828891    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.022314    4908 request.go:600] Waited for 193.352212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rb42p
	I0813 20:26:33.022410    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rb42p
	I0813 20:26:33.022423    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.022431    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.022438    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.025497    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:33.025520    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.025526    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.025531    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.025536    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.025540    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.025546    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.025703    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rb42p","generateName":"kube-proxy-","namespace":"kube-system","uid":"5633ede2-5578-4565-97af-b83cf1b25f0d","resourceVersion":"459","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb4b18d1-5cff-490a-b573-900487c4d9e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb4b18d1-5cff-490a-b573-900487c4d9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0813 20:26:33.222506    4908 request.go:600] Waited for 196.354324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.222577    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.222592    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.222599    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.222606    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.225189    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:33.225203    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.225210    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.225214    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.225217    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.225219    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.225222    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.225367    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:33.225616    4908 pod_ready.go:92] pod "kube-proxy-rb42p" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:33.225629    4908 pod_ready.go:81] duration metric: took 396.730944ms waiting for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.225639    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.422086    4908 request.go:600] Waited for 196.369976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202419-30853
	I0813 20:26:33.422145    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202419-30853
	I0813 20:26:33.422151    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.422156    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.422161    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.425701    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:33.425727    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.425734    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.425739    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.425743    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.425747    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.425751    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.426064    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813202419-30853","namespace":"kube-system","uid":"ed906c56-f110-4e49-aa1c-5e0e0b8cb88c","resourceVersion":"384","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.mirror":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.seen":"2021-08-13T20:25:00.776286387Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0813 20:26:33.621694    4908 request.go:600] Waited for 195.239627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.621766    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.621775    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.621784    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.621790    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.625205    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:33.625224    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.625229    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.625233    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.625236    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.625239    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.625242    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.625559    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:33.625882    4908 pod_ready.go:92] pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:33.625897    4908 pod_ready.go:81] duration metric: took 400.25015ms waiting for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.625910    4908 pod_ready.go:38] duration metric: took 1.20037363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:26:33.625929    4908 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:26:33.626000    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:33.637129    4908 system_svc.go:56] duration metric: took 11.192763ms WaitForService to wait for kubelet.
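
The kubelet check above boils down to an exit-status probe: systemctl is-active --quiet exits 0 iff the unit is active, and minikube runs the command through its SSH runner. A local sketch of the same probe, assuming a systemd host and eliding the sudo/SSH plumbing:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; the exit status alone carries the answer.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
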
	I0813 20:26:33.637152    4908 kubeadm.go:547] duration metric: took 10.737295864s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:26:33.637180    4908 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:26:33.821532    4908 request.go:600] Waited for 184.263745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes
	I0813 20:26:33.821600    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes
	I0813 20:26:33.821608    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.821615    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.821622    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.824485    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:33.824506    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.824513    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.824517    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.824521    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.824525    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.824529    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.825028    4908 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 13315 chars]
	I0813 20:26:33.825422    4908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:26:33.825442    4908 node_conditions.go:123] node cpu capacity is 2
	I0813 20:26:33.825454    4908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:26:33.825458    4908 node_conditions.go:123] node cpu capacity is 2
	I0813 20:26:33.825462    4908 node_conditions.go:105] duration metric: took 188.277523ms to run NodePressure ...
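
The capacity figures above (ephemeral storage 17784752Ki and 2 CPUs, reported once per node) are read straight from each Node's status. A minimal sketch that reproduces them, assuming the same kubeconfig as the earlier sketches:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
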
	I0813 20:26:33.825472    4908 start.go:231] waiting for startup goroutines ...
	I0813 20:26:33.868409    4908 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:26:33.871186    4908 out.go:177] * Done! kubectl is now configured to use "multinode-20210813202419-30853" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:24:30 UTC, end at Fri 2021-08-13 20:29:43 UTC. --
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.103771725Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&PodSandboxMetadata{Name:busybox-84b6686758-nfr5z,Uid:25a75dcb-606f-4b7d-8767-8d6e54d476b1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886395194371601,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,pod-template-hash: 84b6686758,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-13T20:26:34.795942890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7839155d-5552-45cb-ab31-a243fd82f32e,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886328939215197,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2021-08-13T20:25:28.212561306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&PodSandboxMetadata{Name:coredns-558bd4d5db-58k2l,Uid:0431b736-8284-40c7-9bc4-fcc968e4c41b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886326624577367,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,k8s-app: kube-dns,pod-template-hash: 558bd4d5db,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-13T20:25:25.310339049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&PodSandboxMetadata{Name:kindnet-hc4k2,Uid:8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,Namespace:ku
be-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886326201514838,Labels:map[string]string{app: kindnet,controller-revision-hash: 694b6fb659,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-13T20:25:25.186898188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&PodSandboxMetadata{Name:kube-proxy-rb42p,Uid:5633ede2-5578-4565-97af-b83cf1b25f0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886325975288299,Labels:map[string]string{controller-revision-hash: 7cdcb64568,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf1b25f0d,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-13T20:25:25.215609945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-20210813202419-30853,Uid:e846b027c41f0882917076be3af95ba2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886302632631941,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af95ba2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e846b027c41f0882917076be3af95ba2,kubernetes.io/config.seen: 2021-08-13T20:25:00.776286387Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:
&PodSandboxMetadata{Name:kube-controller-manager-multinode-20210813202419-30853,Uid:a2845623a5b448da54677ebde58b73a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886302628290307,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2845623a5b448da54677ebde58b73a6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a2845623a5b448da54677ebde58b73a6,kubernetes.io/config.seen: 2021-08-13T20:25:00.776309845Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-20210813202419-30853,Uid:914dc216865e390473fe61a3bb624cd9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886302599601303,Labels:map[string]string{component: kube-apiserv
er,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.64:8443,kubernetes.io/config.hash: 914dc216865e390473fe61a3bb624cd9,kubernetes.io/config.seen: 2021-08-13T20:25:00.776307664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&PodSandboxMetadata{Name:etcd-multinode-20210813202419-30853,Uid:b2e5f07a9c29a3554b1f5628928cde4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628886302550403959,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,tier: con
trol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.64:2379,kubernetes.io/config.hash: b2e5f07a9c29a3554b1f5628928cde4b,kubernetes.io/config.seen: 2021-08-13T20:25:00.776305134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6ab97357-42e0-4982-978d-5ad070153c7a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
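
These RuntimeService/ListPodSandbox and RuntimeService/ListContainers entries are CRI-O answering the same CRI gRPC endpoints that crictl exercises, over the crio.sock path matching the cri-socket annotation recorded earlier. As a usage note, equivalent queries from the node would be along the lines of:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods    # ListPodSandbox
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a   # ListContainers
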
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.105160192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43366732-418a-4c51-8a82-9e93d44070db name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.105212207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43366732-418a-4c51-8a82-9e93d44070db name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.105398047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af95ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annotations:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43366732-418a-4c51-8a82-9e93d44070db name=/runtime.v1alpha2.RuntimeService/ListContainers
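The surrounding entries are CRI (Container Runtime Interface) ListContainers calls against CRI-O's v1alpha2 RuntimeService: each request carries an empty ContainerFilter, so CRI-O logs "No filters were applied" and returns the full container list, which is why each Response body is identical. As a minimal sketch of how such a call is issued (not part of the captured logs; it assumes CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1alpha2 client bindings):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// Assumed default CRI-O socket path; adjust for the host under test.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter mirrors the logged requests: no filtering is applied,
		// so the runtime returns every container it knows about.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\n", c.Metadata.Name, c.State)
		}
	}

From the command line, crictl ps and crictl pods exercise the same ListContainers and ListPodSandbox RPCs, respectively.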
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.314622937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fc2ca274-dded-4a68-9a9e-93e3bb59015c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.314683117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fc2ca274-dded-4a68-9a9e-93e3bb59015c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.314873457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fc2ca274-dded-4a68-9a9e-93e3bb59015c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.348624758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dc3d7925-aae2-4010-819d-b7ee8f05bf67 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.348754482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dc3d7925-aae2-4010-819d-b7ee8f05bf67 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:29:43 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:29:43.348956912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dc3d7925-aae2-4010-819d-b7ee8f05bf67 name=/runtime.v1alpha2.RuntimeService/ListContainers
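
The repeated Request/Response pairs above are the kubelet and minikube's log collector polling CRI-O over the CRI gRPC API; with an empty ContainerFilter, CRI-O logs "No filters were applied" and returns the full container list each time. As a minimal illustrative sketch (not part of the test suite), the same /runtime.v1alpha2.RuntimeService/ListContainers call can be issued in Go with the published CRI bindings; the insecure dial and the socket path (taken from the cri-socket annotation recorded later in this log) are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// Dial the CRI-O socket advertised in the node annotations below.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter reproduces the "No filters were applied" debug
		// line above: CRI-O answers with every container it knows about.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}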
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bda85b31fadfe       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   3 minutes ago       Running             busybox                   0                   d03383b45e258
	61e562ffee5c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    4 minutes ago       Running             storage-provisioner       0                   a31f0c6333a6f
	5af471efb7740       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    4 minutes ago       Running             kindnet-cni               0                   5054e726d5910
	374392d2d0eff       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    4 minutes ago       Running             coredns                   0                   368ae7f59fbb5
	eb6758efc0050       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    4 minutes ago       Running             kube-proxy                0                   64bd4de6112df
	131d38cbeff7c       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    4 minutes ago       Running             kube-scheduler            0                   022e7f58532b9
	ea66bd2fc80e5       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    4 minutes ago       Running             kube-controller-manager   0                   ea3b9756a38fb
	caa0e4513e736       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    4 minutes ago       Running             etcd                      0                   5456dc5ae342a
	bbb34d9175340       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    4 minutes ago       Running             kube-apiserver            0                   bd5b31b6d14b0
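
The CREATED column in the table above is derived from the CreatedAt fields in the ListContainers responses, which are Unix timestamps in nanoseconds. A small illustrative Go snippet (the value is copied from the busybox entry above) shows the conversion:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CreatedAt values in the CRI responses above are Unix nanoseconds;
		// this one belongs to the busybox container.
		createdAt := int64(1628886399538659232)
		t := time.Unix(0, createdAt)
		fmt.Println(t.UTC().Format(time.RFC3339Nano)) // 2021-08-13T20:26:39.538659232Z
		// The table's "3 minutes ago" is this age measured at the 20:29:43
		// collection time; time.Since reports it relative to "now".
		fmt.Println(time.Since(t).Round(time.Minute))
	}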
	
	* 
	* ==> coredns [374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210813202419-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813202419-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=multinode-20210813202419-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_25_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:25:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813202419-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:29:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    multinode-20210813202419-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 023e64e902bf4156830ec265d715a4eb
	  System UUID:                023e64e9-02bf-4156-830e-c265d715a4eb
	  Boot ID:                    9fa698ef-eb72-480d-906a-fb3492960c09
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-nfr5z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 coredns-558bd4d5db-58k2l                                  100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     4m18s
	  kube-system                 etcd-multinode-20210813202419-30853                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m33s
	  kube-system                 kindnet-hc4k2                                             100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m18s
	  kube-system                 kube-apiserver-multinode-20210813202419-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-multinode-20210813202419-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-rb42p                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-multinode-20210813202419-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m25s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s  kubelet     Node multinode-20210813202419-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s  kubelet     Node multinode-20210813202419-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s  kubelet     Node multinode-20210813202419-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m19s  kubelet     Node multinode-20210813202419-30853 status is now: NodeReady
	  Normal  Starting                 4m16s  kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210813202419-30853-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813202419-30853-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:26:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813202419-30853-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:29:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    multinode-20210813202419-30853-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c50713fe18cb469fb76b09ab6d47701b
	  System UUID:                c50713fe-18cb-469f-b76b-09ab6d47701b
	  Boot ID:                    62516a66-415a-4559-89c4-46cc8426dd68
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-g7sjs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kindnet-nhtk5               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-8vgbg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m18s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m11s                  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeReady
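
Both node descriptions above are the equivalent of `kubectl describe nodes` output. The same per-node facts they render (name, PodCIDR, Ready condition) can be read programmatically; the following is a hedged client-go sketch, not minikube's code, and the kubeconfig path is a placeholder:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; point it at the kubeconfig for the minikube profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			ready := "Unknown"
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" { // corev1.NodeReady
					ready = string(c.Status)
				}
			}
			fmt.Printf("%s podCIDR=%s ready=%s\n", n.Name, n.Spec.PodCIDR, ready)
		}
	}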
	
	* 
	* ==> dmesg <==
	* [Aug13 20:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093885] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.759808] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.321940] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.032447] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.923684] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1732 comm=systemd-network
	[  +1.581105] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006095] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.106824] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +14.449736] systemd-fstab-generator[2163]: Ignoring "noauto" for root device
	[  +0.138675] systemd-fstab-generator[2176]: Ignoring "noauto" for root device
	[  +0.190151] systemd-fstab-generator[2203]: Ignoring "noauto" for root device
	[  +6.959815] systemd-fstab-generator[2407]: Ignoring "noauto" for root device
	[Aug13 20:25] systemd-fstab-generator[2818]: Ignoring "noauto" for root device
	[ +14.141677] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.365438] kauditd_printk_skb: 146 callbacks suppressed
	[Aug13 20:26] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc] <==
	* 2021-08-13 20:26:14.632612 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-public\" " with result "range_response_count:1 size:351" took too long (3.219515511s) to execute
	2021-08-13 20:26:14.632844 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.751049376s) to execute
	2021-08-13 20:26:14.633269 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.471950894s) to execute
	2021-08-13 20:26:19.182079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:22.135390 W | etcdserver: read-only range request "key:\"/registry/minions/multinode-20210813202419-30853-m02\" " with result "range_response_count:0 size:5" took too long (113.943195ms) to execute
	2021-08-13 20:26:29.186064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:39.182016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:49.181809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:59.182686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:09.184001 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:19.185295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:29.182846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:39.182075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:49.182017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:59.181737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:09.181542 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:19.181864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:29.182306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:39.182712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:49.181617 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:59.181893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:09.182353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:19.181905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:29.181911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:39.182665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
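
Aside from the periodic /health probes, the notable entries above are the "took too long" warnings: read-only range requests such as /registry/namespaces/kube-public taking 1.5-3.2s, usually a sign of disk or CPU starvation in the VM. As a minimal sketch of timing the same range request with the etcd v3 client: the endpoint, the key, and the omitted TLS configuration are all assumptions here, since a kubeadm-style etcd requires client certificates:

	package main

	import (
		"context"
		"fmt"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://127.0.0.1:2379"},
			DialTimeout: 5 * time.Second,
			// TLS: a *tls.Config built from the apiserver-etcd-client
			// cert/key would be required against a real kubeadm etcd.
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Mirror the slow read-only range request from the log and time it.
		start := time.Now()
		_, err = cli.Get(ctx, "/registry/namespaces/kube-public")
		fmt.Println("range request took", time.Since(start), "err:", err)
	}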
	
	* 
	* ==> kernel <==
	*  20:29:43 up 5 min,  0 users,  load average: 0.46, 0.54, 0.27
	Linux multinode-20210813202419-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23] <==
	* Trace[1158120222]: [3.22497472s] [3.22497472s] END
	I0813 20:26:14.637712       1 trace.go:205] Trace[636807639]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.64,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:26:11.413) (total time: 3224ms):
	Trace[636807639]: ---"Object stored in database" 3224ms (20:26:00.637)
	Trace[636807639]: [3.224138879s] [3.224138879s] END
	I0813 20:26:14.642539       1 trace.go:205] Trace[314514662]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.64,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:26:10.515) (total time: 4127ms):
	Trace[314514662]: ---"Listing from storage done" 4121ms (20:26:00.636)
	Trace[314514662]: [4.127317243s] [4.127317243s] END
	I0813 20:26:26.609647       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:26:26.609822       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:26:26.609842       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:26:59.309208       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:26:59.309429       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:26:59.309470       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:27:30.575079       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:27:30.575394       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:27:30.575417       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:28:03.928641       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:28:03.928819       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:28:03.928843       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:28:41.140416       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:28:41.140530       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:28:41.140562       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:29:24.345558       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:29:24.345746       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:29:24.345764       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7] <==
	* I0813 20:25:24.489278       1 shared_informer.go:247] Caches are synced for HPA 
	I0813 20:25:24.504785       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:25:24.511800       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:25:24.579326       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:25:24.978975       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:25:24.979079       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:25:24.996212       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:25:25.176516       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hc4k2"
	I0813 20:25:25.202607       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb42p"
	I0813 20:25:25.240652       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-nnsgn"
	I0813 20:25:25.283631       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-58k2l"
	I0813 20:25:25.455922       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:25:25.478464       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-nnsgn"
	I0813 20:25:29.350055       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0813 20:26:22.142527       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210813202419-30853-m02" does not exist
	I0813 20:26:22.192570       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nhtk5"
	I0813 20:26:22.250198       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8vgbg"
	E0813 20:26:22.309916       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"128941b6-4535-4cf8-99f7-12fec9d1ed4e", ResourceVersion:"496", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764483113, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00255fb78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00255fb90)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00255fba8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00255fbc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00252d5a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Creat
ionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00255fbd8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexV
olumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00255fbf0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVol
umeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSI
VolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00255fc08), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00252d5c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00252d600)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0025a70e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00259d818), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007712d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0025be5d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00259d860)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:26:22.321375       1 range_allocator.go:373] Set node multinode-20210813202419-30853-m02 PodCIDR to [10.244.1.0/24]
	I0813 20:26:24.360327       1 event.go:291] "Event occurred" object="multinode-20210813202419-30853-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210813202419-30853-m02 event: Registered Node multinode-20210813202419-30853-m02 in Controller"
	W0813 20:26:24.360580       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210813202419-30853-m02. Assuming now as a timestamp.
	I0813 20:26:34.758654       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0813 20:26:34.772682       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-g7sjs"
	I0813 20:26:34.779365       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-nfr5z"
	
	* 
	* ==> kube-proxy [eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e] <==
	* I0813 20:25:27.511844       1 node.go:172] Successfully retrieved node IP: 192.168.39.64
	I0813 20:25:27.511964       1 server_others.go:140] Detected node IP 192.168.39.64
	W0813 20:25:27.511986       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:25:27.651746       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:25:27.651766       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:25:27.651780       1 server_others.go:212] Using iptables Proxier.
	I0813 20:25:27.652928       1 server.go:643] Version: v1.21.3
	I0813 20:25:27.660952       1 config.go:315] Starting service config controller
	I0813 20:25:27.661041       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:25:27.661061       1 config.go:224] Starting endpoint slice config controller
	I0813 20:25:27.661065       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:25:27.674655       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:25:27.676427       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:25:27.762085       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:25:27.762283       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2] <==
	* E0813 20:25:08.938811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:25:08.938904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:25:08.940418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940472       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:25:08.940518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:25:08.940562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:25:08.941087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:25:09.901889       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:09.931424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:25:10.012721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:25:10.136085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:25:10.142736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:25:10.161741       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:25:10.230611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:10.234412       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:25:10.262511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:25:10.272680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:25:10.282797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:10.313505       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:10.385231       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:25:10.474239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0813 20:25:12.033299       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:24:30 UTC, end at Fri 2021-08-13 20:29:43 UTC. --
	Aug 13 20:25:25 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:25.407075    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6xv\" (UniqueName: \"kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv\") pod \"coredns-558bd4d5db-nnsgn\" (UID: \"13726b49-920a-4c21-9f87-b969630657c6\") "
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.559507    2827 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq6xv\" (UniqueName: \"kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv\") pod \"13726b49-920a-4c21-9f87-b969630657c6\" (UID: \"13726b49-920a-4c21-9f87-b969630657c6\") "
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.559563    2827 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13726b49-920a-4c21-9f87-b969630657c6-config-volume\") pod \"13726b49-920a-4c21-9f87-b969630657c6\" (UID: \"13726b49-920a-4c21-9f87-b969630657c6\") "
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: W0813 20:25:27.559787    2827 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/13726b49-920a-4c21-9f87-b969630657c6/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.559932    2827 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13726b49-920a-4c21-9f87-b969630657c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "13726b49-920a-4c21-9f87-b969630657c6" (UID: "13726b49-920a-4c21-9f87-b969630657c6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.572416    2827 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv" (OuterVolumeSpecName: "kube-api-access-bq6xv") pod "13726b49-920a-4c21-9f87-b969630657c6" (UID: "13726b49-920a-4c21-9f87-b969630657c6"). InnerVolumeSpecName "kube-api-access-bq6xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.703823    2827 reconciler.go:319] "Volume detached for volume \"kube-api-access-bq6xv\" (UniqueName: \"kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv\") on node \"multinode-20210813202419-30853\" DevicePath \"\""
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.703854    2827 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13726b49-920a-4c21-9f87-b969630657c6-config-volume\") on node \"multinode-20210813202419-30853\" DevicePath \"\""
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:28.213056    2827 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:28.312789    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7839155d-5552-45cb-ab31-a243fd82f32e-tmp\") pod \"storage-provisioner\" (UID: \"7839155d-5552-45cb-ab31-a243fd82f32e\") "
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:28.312965    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srfr\" (UniqueName: \"kubernetes.io/projected/7839155d-5552-45cb-ab31-a243fd82f32e-kube-api-access-4srfr\") pod \"storage-provisioner\" (UID: \"7839155d-5552-45cb-ab31-a243fd82f32e\") "
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/memory/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: E0813 20:25:28.642224    2827 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/8c73e66e-2ec6-4a1b-a7af-3edb2c517f18/etc-hosts with error exit status 1" pod="kube-system/kindnet-hc4k2"
	Aug 13 20:25:29 multinode-20210813202419-30853 kubelet[2827]: E0813 20:25:29.121662    2827 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = open /proc/3411/stat: no such file or directory: container process not found"
	Aug 13 20:25:29 multinode-20210813202419-30853 kubelet[2827]: E0813 20:25:29.121801    2827 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = open /proc/3411/stat: no such file or directory: container process not found" pod="kube-system/coredns-558bd4d5db-nnsgn"
	Aug 13 20:25:30 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:30.653649    2827 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:26:34 multinode-20210813202419-30853 kubelet[2827]: I0813 20:26:34.796437    2827 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:26:34 multinode-20210813202419-30853 kubelet[2827]: I0813 20:26:34.980987    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspch\" (UniqueName: \"kubernetes.io/projected/25a75dcb-606f-4b7d-8767-8d6e54d476b1-kube-api-access-qspch\") pod \"busybox-84b6686758-nfr5z\" (UID: \"25a75dcb-606f-4b7d-8767-8d6e54d476b1\") "
	
	* 
	* ==> storage-provisioner [61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a] <==
	* I0813 20:25:29.920583       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:25:29.940741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:25:29.941291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:25:29.960266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:25:29.962081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf84d585-93d6-4de2-bf54-ae6b01640a94", APIVersion:"v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210813202419-30853_ea188996-5464-4a66-8fe1-fc426d592470 became leader
	I0813 20:25:29.966951       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210813202419-30853_ea188996-5464-4a66-8fe1-fc426d592470!
	I0813 20:25:30.069775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210813202419-30853_ea188996-5464-4a66-8fe1-fc426d592470!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210813202419-30853 -n multinode-20210813202419-30853
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210813202419-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210813202419-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210813202419-30853 describe pod : exit status 1 (46.011495ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context multinode-20210813202419-30853 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (190.37s)

TestMultiNode/serial/PingHostFrom2Pods (63.48s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1m0.329749512s)
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- sh -c "ping -c 1 <nil>"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-g7sjs -- sh -c "ping -c 1 <nil>": exit status 2 (258.028719ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:538: Failed to ping host (<nil>) from pod (busybox-84b6686758-g7sjs): exit status 2
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-nfr5z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-nfr5z -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202419-30853 -- exec busybox-84b6686758-nfr5z -- sh -c "ping -c 1 192.168.39.1": exit status 1 (251.973424ms)

-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.39.1) from pod (busybox-84b6686758-nfr5z): exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210813202419-30853 -n multinode-20210813202419-30853
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202419-30853 logs -n 25: (1.331860467s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                              Args                              |                Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210813201821-30853 image load                     | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:52 UTC | Fri, 13 Aug 2021 20:21:52 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/busybox.tar |                                        |          |         |                               |                               |
	| ssh     | -p                                                             | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:53 UTC | Fri, 13 Aug 2021 20:21:53 UTC |
	|         | functional-20210813201821-30853                                |                                        |          |         |                               |                               |
	|         | -- sudo crictl images                                          |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:54 UTC | Fri, 13 Aug 2021 20:21:54 UTC |
	|         | ssh stat                                                       |                                        |          |         |                               |                               |
	|         | /mount-9p/created-by-test                                      |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:54 UTC | Fri, 13 Aug 2021 20:21:54 UTC |
	|         | ssh stat                                                       |                                        |          |         |                               |                               |
	|         | /mount-9p/created-by-pod                                       |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:54 UTC | Fri, 13 Aug 2021 20:21:54 UTC |
	|         | ssh sudo umount -f /mount-9p                                   |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:55 UTC | Fri, 13 Aug 2021 20:21:55 UTC |
	|         | ssh findmnt -T /mount-9p | grep                                |                                        |          |         |                               |                               |
	|         | 9p                                                             |                                        |          |         |                               |                               |
	| -p      | functional-20210813201821-30853                                | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:21:55 UTC | Fri, 13 Aug 2021 20:21:56 UTC |
	|         | ssh -- ls -la /mount-9p                                        |                                        |          |         |                               |                               |
	| delete  | -p                                                             | functional-20210813201821-30853        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:22:18 UTC | Fri, 13 Aug 2021 20:22:19 UTC |
	|         | functional-20210813201821-30853                                |                                        |          |         |                               |                               |
	| start   | -p                                                             | json-output-20210813202219-30853       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:22:19 UTC | Fri, 13 Aug 2021 20:24:06 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                                  |                                        |          |         |                               |                               |
	|         | --memory=2200 --wait=true                                      |                                        |          |         |                               |                               |
	|         | --driver=kvm2                                                  |                                        |          |         |                               |                               |
	|         | --container-runtime=crio                                       |                                        |          |         |                               |                               |
	| unpause | -p                                                             | json-output-20210813202219-30853       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:24:09 UTC | Fri, 13 Aug 2021 20:24:09 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                                  |                                        |          |         |                               |                               |
	| stop    | -p                                                             | json-output-20210813202219-30853       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:24:09 UTC | Fri, 13 Aug 2021 20:24:17 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                                  |                                        |          |         |                               |                               |
	| delete  | -p                                                             | json-output-20210813202219-30853       | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:17 UTC | Fri, 13 Aug 2021 20:24:18 UTC |
	|         | json-output-20210813202219-30853                               |                                        |          |         |                               |                               |
	| delete  | -p                                                             | json-output-error-20210813202418-30853 | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:18 UTC | Fri, 13 Aug 2021 20:24:18 UTC |
	|         | json-output-error-20210813202418-30853                         |                                        |          |         |                               |                               |
	| start   | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:19 UTC | Fri, 13 Aug 2021 20:26:33 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | --wait=true --memory=2200                                      |                                        |          |         |                               |                               |
	|         | --nodes=2 -v=8                                                 |                                        |          |         |                               |                               |
	|         | --alsologtostderr                                              |                                        |          |         |                               |                               |
	|         | --driver=kvm2                                                  |                                        |          |         |                               |                               |
	|         | --container-runtime=crio                                       |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853 -- apply -f                  | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:34 UTC | Fri, 13 Aug 2021 20:26:34 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml              |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:34 UTC | Fri, 13 Aug 2021 20:26:40 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- rollout status                                              |                                        |          |         |                               |                               |
	|         | deployment/busybox                                             |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:40 UTC | Fri, 13 Aug 2021 20:26:40 UTC |
	|         | -- get pods -o                                                 |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'                            |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:40 UTC | Fri, 13 Aug 2021 20:26:40 UTC |
	|         | -- get pods -o                                                 |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                           |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:27:40 UTC | Fri, 13 Aug 2021 20:27:41 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- exec                                                        |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nfr5z --                                    |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.io                                         |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:28:41 UTC | Fri, 13 Aug 2021 20:28:41 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- exec                                                        |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nfr5z --                                    |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.default                                    |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:29:42 UTC | Fri, 13 Aug 2021 20:29:42 UTC |
	|         | -- exec busybox-84b6686758-nfr5z                               |                                        |          |         |                               |                               |
	|         | -- nslookup                                                    |                                        |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local                           |                                        |          |         |                               |                               |
	| -p      | multinode-20210813202419-30853                                 | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:29:42 UTC | Fri, 13 Aug 2021 20:29:43 UTC |
	|         | logs -n 25                                                     |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202419-30853                              | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:29:44 UTC | Fri, 13 Aug 2021 20:29:44 UTC |
	|         | -- get pods -o                                                 |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                           |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:29:44 UTC | Fri, 13 Aug 2021 20:30:45 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- exec                                                        |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-g7sjs                                       |                                        |          |         |                               |                               |
	|         | -- sh -c nslookup                                              |                                        |          |         |                               |                               |
	|         | host.minikube.internal | awk                                   |                                        |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                        |                                        |          |         |                               |                               |
	| kubectl | -p                                                             | multinode-20210813202419-30853         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:30:45 UTC | Fri, 13 Aug 2021 20:30:45 UTC |
	|         | multinode-20210813202419-30853                                 |                                        |          |         |                               |                               |
	|         | -- exec                                                        |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nfr5z                                       |                                        |          |         |                               |                               |
	|         | -- sh -c nslookup                                              |                                        |          |         |                               |                               |
	|         | host.minikube.internal | awk                                   |                                        |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                        |                                        |          |         |                               |                               |
	|---------|----------------------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:24:19
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:24:19.061564    4908 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:24:19.061854    4908 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:19.061864    4908 out.go:311] Setting ErrFile to fd 2...
	I0813 20:24:19.061870    4908 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:19.062119    4908 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:24:19.062551    4908 out.go:305] Setting JSON to false
	I0813 20:24:19.097379    4908 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":7621,"bootTime":1628878638,"procs":151,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:24:19.097491    4908 start.go:121] virtualization: kvm guest
	I0813 20:24:19.099820    4908 out.go:177] * [multinode-20210813202419-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:24:19.101377    4908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:24:19.099976    4908 notify.go:169] Checking for updates...
	I0813 20:24:19.102828    4908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:24:19.104123    4908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:19.105401    4908 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:24:19.105587    4908 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:24:19.133777    4908 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:24:19.133802    4908 start.go:278] selected driver: kvm2
	I0813 20:24:19.133809    4908 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:24:19.133825    4908 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:24:19.134797    4908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:24:19.134995    4908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:24:19.145532    4908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:24:19.145616    4908 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:24:19.145753    4908 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:24:19.145783    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:24:19.145790    4908 cni.go:154] 0 nodes found, recommending kindnet
	I0813 20:24:19.145802    4908 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:24:19.145816    4908 start_flags.go:277] config:
	{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:24:19.145903    4908 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:24:19.147745    4908 out.go:177] * Starting control plane node multinode-20210813202419-30853 in cluster multinode-20210813202419-30853
	I0813 20:24:19.147764    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:24:19.147788    4908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:24:19.147833    4908 cache.go:56] Caching tarball of preloaded images
	I0813 20:24:19.147916    4908 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:24:19.147933    4908 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:24:19.148220    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:24:19.148247    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json: {Name:mk17167fb279b033724517938130069093c08bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:19.148372    4908 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:24:19.148399    4908 start.go:313] acquiring machines lock for multinode-20210813202419-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:24:19.148440    4908 start.go:317] acquired machines lock for "multinode-20210813202419-30853" in 25.188µs
	I0813 20:24:19.148460    4908 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:24:19.148527    4908 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 20:24:19.150306    4908 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 20:24:19.150406    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:24:19.150445    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:24:19.160407    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0813 20:24:19.160846    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:24:19.161473    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:24:19.161494    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:24:19.161818    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:24:19.161997    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:19.162159    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:19.162292    4908 start.go:160] libmachine.API.Create for "multinode-20210813202419-30853" (driver="kvm2")
	I0813 20:24:19.162325    4908 client.go:168] LocalClient.Create starting
	I0813 20:24:19.162368    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:24:19.162428    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:24:19.162451    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:24:19.162609    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:24:19.162633    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:24:19.162653    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:24:19.162707    4908 main.go:130] libmachine: Running pre-create checks...
	I0813 20:24:19.162723    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .PreCreateCheck
	I0813 20:24:19.163065    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetConfigRaw
	I0813 20:24:19.163453    4908 main.go:130] libmachine: Creating machine...
	I0813 20:24:19.163468    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Create
	I0813 20:24:19.163590    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Creating KVM machine...
	I0813 20:24:19.166176    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found existing default KVM network
	I0813 20:24:19.167250    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.167105    4931 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000a85d0] misses:0}
	I0813 20:24:19.167288    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.167200    4931 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
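
The two network.go lines above show minikube reserving the first free private /24 for the libvirt network. A minimal, self-contained sketch of that idea in Go (not minikube's actual network.go; the candidate range starting at 192.168.39.0 simply mirrors the subnet chosen in this run, and overlapsLocal is an illustrative helper):

    package main

    import (
    	"fmt"
    	"net"
    )

    // overlapsLocal reports whether any local interface address falls inside subnet.
    func overlapsLocal(subnet *net.IPNet) (bool, error) {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false, err
    	}
    	for _, a := range addrs {
    		ip, _, err := net.ParseCIDR(a.String())
    		if err != nil {
    			continue // not a CIDR-style address; skip
    		}
    		if subnet.Contains(ip) {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	for third := 39; third < 254; third++ {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		_, subnet, _ := net.ParseCIDR(cidr)
    		if taken, err := overlapsLocal(subnet); err == nil && !taken {
    			fmt.Println("using free private subnet", cidr)
    			return
    		}
    	}
    }
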
	I0813 20:24:19.188898    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | trying to create private KVM network mk-multinode-20210813202419-30853 192.168.39.0/24...
	I0813 20:24:19.455247    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | private KVM network mk-multinode-20210813202419-30853 192.168.39.0/24 created
	I0813 20:24:19.455284    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853 ...
	I0813 20:24:19.455309    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.455217    4931 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:19.455351    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:24:19.455439    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:24:19.635198    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.635089    4931 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa...
	I0813 20:24:19.793160    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.793049    4931 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/multinode-20210813202419-30853.rawdisk...
	I0813 20:24:19.793197    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Writing magic tar header
	I0813 20:24:19.793212    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Writing SSH key tar header
	I0813 20:24:19.793227    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:19.793190    4931 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853 ...
	I0813 20:24:19.793370    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853
	I0813 20:24:19.793415    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:24:19.793441    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853 (perms=drwx------)
	I0813 20:24:19.793479    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:24:19.793498    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:19.793514    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:24:19.793527    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:24:19.793541    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:24:19.793555    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:24:19.793567    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Checking permissions on dir: /home
	I0813 20:24:19.793579    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Skipping /home - not owner
	I0813 20:24:19.793598    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:24:19.793616    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:24:19.793625    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:24:19.793634    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Creating domain...
	I0813 20:24:19.818640    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:87:3c:24 in network default
	I0813 20:24:19.819123    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Ensuring networks are active...
	I0813 20:24:19.819168    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:19.820954    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Ensuring network default is active
	I0813 20:24:19.821267    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Ensuring network mk-multinode-20210813202419-30853 is active
	I0813 20:24:19.821773    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Getting domain xml...
	I0813 20:24:19.823495    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Creating domain...
	I0813 20:24:20.266043    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Waiting to get IP...
	I0813 20:24:20.266736    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.267269    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.267288    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:20.267227    4931 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:24:20.531381    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.531825    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.531847    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:20.531787    4931 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:24:20.914176    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.914608    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:20.914642    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:20.914550    4931 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:24:21.339152    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.339573    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.339600    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:21.339513    4931 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:24:21.813990    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.814505    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:21.814535    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:21.814453    4931 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:24:22.403267    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:22.403697    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:22.403720    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:22.403665    4931 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:24:23.238942    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.239305    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.239337    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:23.239266    4931 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:24:23.987123    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.987594    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:23.987625    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:23.987526    4931 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:24:24.975907    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:24.976378    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:24.976410    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:24.976302    4931 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:24:26.167579    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:26.168038    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:26.168068    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:26.167984    4931 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:24:27.847780    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:27.848253    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:27.848285    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:27.848180    4931 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:24:30.195294    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:30.195832    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find current IP address of domain multinode-20210813202419-30853 in network mk-multinode-20210813202419-30853
	I0813 20:24:30.195866    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | I0813 20:24:30.195763    4931 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
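
The run of retry.go lines above is the wait-for-IP loop: the sleep grows roughly 1.5x per attempt (263ms, 381ms, ... 3.37s) with jitter until the DHCP lease appears. A sketch of that backoff pattern, with lookupIP as a hypothetical stand-in for the libvirt lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    func lookupIP() (string, error) { return "", errNoIP } // placeholder

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
    	}
    	return "", errNoIP
    }

    func main() {
    	if _, err := waitForIP(2 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
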
	I0813 20:24:33.566130    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.566593    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Found IP for machine: 192.168.39.64
	I0813 20:24:33.566617    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Reserving static IP address...
	I0813 20:24:33.566631    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has current primary IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.566933    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | unable to find host DHCP lease matching {name: "multinode-20210813202419-30853", mac: "52:54:00:16:ef:64", ip: "192.168.39.64"} in network mk-multinode-20210813202419-30853
	I0813 20:24:33.613374    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Getting to WaitForSSH function...
	I0813 20:24:33.613407    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Reserved static IP address: 192.168.39.64
	I0813 20:24:33.613422    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Waiting for SSH to be available...
	I0813 20:24:33.618672    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.619035    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:33.619071    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.619162    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Using SSH client type: external
	I0813 20:24:33.619198    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa (-rw-------)
	I0813 20:24:33.619239    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:24:33.619256    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | About to run SSH command:
	I0813 20:24:33.619287    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | exit 0
	I0813 20:24:33.750338    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | SSH cmd err, output: <nil>: 
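
The WaitForSSH step above probes the guest by running `exit 0` through the system ssh binary with the non-interactive options shown in the DBG line. A sketch of that probe; the host IP and key path are the values from this run, and a real caller would pass its own:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func sshAvailable(host, keyPath string) bool {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@" + host,
    		"exit 0",
    	}
    	// A nil error means the remote command ran and returned status 0.
    	return exec.Command("/usr/bin/ssh", args...).Run() == nil
    }

    func main() {
    	fmt.Println(sshAvailable("192.168.39.64", "/path/to/id_rsa"))
    }
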
	I0813 20:24:33.750831    4908 main.go:130] libmachine: (multinode-20210813202419-30853) KVM machine creation complete!
	I0813 20:24:33.750910    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetConfigRaw
	I0813 20:24:33.751457    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:33.751637    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:33.751796    4908 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:24:33.751815    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:24:33.754204    4908 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:24:33.754217    4908 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:24:33.754223    4908 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:24:33.754230    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:33.758462    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.758775    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:33.758808    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.758928    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:33.759077    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.759207    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.759302    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:33.759425    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:33.759645    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:33.759659    4908 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:24:33.878033    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:24:33.878066    4908 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:24:33.878078    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:33.883049    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.883397    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:33.883436    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:33.883491    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:33.883659    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.883806    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:33.883931    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:33.884070    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:33.884248    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:33.884263    4908 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:24:34.003403    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:24:34.003491    4908 main.go:130] libmachine: found compatible host: buildroot
	I0813 20:24:34.003508    4908 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:24:34.003520    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:34.003750    4908 buildroot.go:166] provisioning hostname "multinode-20210813202419-30853"
	I0813 20:24:34.003775    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:34.003937    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.009088    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.009448    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.009484    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.009595    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.009749    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.009913    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.010047    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.010191    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:34.010374    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:34.010394    4908 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813202419-30853 && echo "multinode-20210813202419-30853" | sudo tee /etc/hostname
	I0813 20:24:34.139277    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813202419-30853
	
	I0813 20:24:34.139301    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.144096    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.144392    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.144422    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.144565    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.144746    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.144868    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.144992    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.145110    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:34.145272    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:34.145300    4908 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813202419-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813202419-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813202419-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:24:34.268587    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:24:34.268621    4908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:24:34.268648    4908 buildroot.go:174] setting up certificates
	I0813 20:24:34.268660    4908 provision.go:83] configureAuth start
	I0813 20:24:34.268671    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetMachineName
	I0813 20:24:34.268934    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:34.273903    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.274197    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.274222    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.274304    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.278459    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.278737    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.278767    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.278809    4908 provision.go:138] copyHostCerts
	I0813 20:24:34.278842    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:24:34.278906    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:24:34.278919    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:24:34.278981    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:24:34.279046    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:24:34.279066    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:24:34.279073    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:24:34.279097    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:24:34.279134    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:24:34.279152    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:24:34.279159    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:24:34.279176    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:24:34.279216    4908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813202419-30853 san=[192.168.39.64 192.168.39.64 localhost 127.0.0.1 minikube multinode-20210813202419-30853]
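
The provision.go line above generates a server certificate whose SANs cover the machine IP, localhost, and both hostnames. A minimal sketch of issuing such a cert with Go's crypto/x509; loading the existing ca.pem/ca-key.pem is elided here and a throwaway CA is created instead, so this illustrates the SAN mechanics rather than minikube's exact code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	// Server cert with the SANs listed in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210813202419-30853"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-20210813202419-30853"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.64"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
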
	I0813 20:24:34.442793    4908 provision.go:172] copyRemoteCerts
	I0813 20:24:34.442870    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:24:34.442898    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.448214    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.448500    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.448537    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.448680    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.448866    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.449000    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.449121    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:34.534014    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 20:24:34.534071    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:24:34.549882    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 20:24:34.549923    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 20:24:34.565408    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 20:24:34.565453    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:24:34.581923    4908 provision.go:86] duration metric: configureAuth took 313.252596ms
	I0813 20:24:34.581944    4908 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:24:34.582138    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:24:34.582242    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:34.586926    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.587237    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:34.587277    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:34.587376    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:34.587530    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.587654    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:34.587749    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:34.587859    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:24:34.588004    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0813 20:24:34.588026    4908 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:24:35.276577    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:24:35.276616    4908 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:24:35.276630    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetURL
	I0813 20:24:35.279350    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Using libvirt version 3000000
	I0813 20:24:35.283656    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.283931    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.283964    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.284125    4908 main.go:130] libmachine: Docker is up and running!
	I0813 20:24:35.284142    4908 main.go:130] libmachine: Reticulating splines...
	I0813 20:24:35.284150    4908 client.go:171] LocalClient.Create took 16.121814174s
	I0813 20:24:35.284168    4908 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813202419-30853" took 16.121878034s
	I0813 20:24:35.284175    4908 start.go:267] post-start starting for "multinode-20210813202419-30853" (driver="kvm2")
	I0813 20:24:35.284183    4908 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:24:35.284200    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.284445    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:24:35.284473    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:35.288791    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.289071    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.289105    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.289203    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:35.289371    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:35.289538    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:35.289728    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:35.374303    4908 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:24:35.378640    4908 command_runner.go:124] > NAME=Buildroot
	I0813 20:24:35.378660    4908 command_runner.go:124] > VERSION=2020.02.12
	I0813 20:24:35.378667    4908 command_runner.go:124] > ID=buildroot
	I0813 20:24:35.378673    4908 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 20:24:35.378681    4908 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 20:24:35.378899    4908 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:24:35.378921    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:24:35.378973    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:24:35.379077    4908 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:24:35.379091    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /etc/ssl/certs/308532.pem
	I0813 20:24:35.379194    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:24:35.385603    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:24:35.402228    4908 start.go:270] post-start completed in 118.036342ms
	I0813 20:24:35.402278    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetConfigRaw
	I0813 20:24:35.402846    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:35.407835    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.408144    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.408173    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.408459    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:24:35.408672    4908 start.go:129] duration metric: createHost completed in 16.260135775s
	I0813 20:24:35.408688    4908 start.go:80] releasing machines lock for "multinode-20210813202419-30853", held for 16.260236178s
	I0813 20:24:35.408724    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.408924    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:35.413018    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.413273    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.413302    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.413443    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.413611    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.414029    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:24:35.414255    4908 ssh_runner.go:149] Run: systemctl --version
	I0813 20:24:35.414282    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:35.414301    4908 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:24:35.414345    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:24:35.418862    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.419229    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.419257    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.419326    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:35.419488    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:35.419647    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:35.419775    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:35.419937    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.420236    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:35.420262    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:35.420385    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:24:35.420521    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:24:35.420681    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:24:35.420796    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:24:35.522808    4908 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 20:24:35.522840    4908 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 20:24:35.522848    4908 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 20:24:35.522867    4908 command_runner.go:124] > The document has moved
	I0813 20:24:35.522877    4908 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 20:24:35.522883    4908 command_runner.go:124] > </BODY></HTML>
	I0813 20:24:35.522927    4908 command_runner.go:124] > systemd 244 (244)
	I0813 20:24:35.522951    4908 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0813 20:24:35.522972    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:24:35.523085    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:24:35.546257    4908 command_runner.go:124] ! time="2021-08-13T20:24:35Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 20:24:37.531845    4908 command_runner.go:124] ! time="2021-08-13T20:24:37Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:24:39.519439    4908 command_runner.go:124] ! time="2021-08-13T20:24:39Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:24:39.524339    4908 command_runner.go:124] > {
	I0813 20:24:39.524356    4908 command_runner.go:124] >   "images": [
	I0813 20:24:39.524360    4908 command_runner.go:124] >   ]
	I0813 20:24:39.524363    4908 command_runner.go:124] > }
	I0813 20:24:39.524380    4908 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.001278627s)
	I0813 20:24:39.524462    4908 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
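
Above, the empty `{"images": []}` returned by `sudo crictl images --output json` is what makes minikube conclude that no images are preloaded. A sketch of that check; the struct models only the minimal JSON subset visible in the log, not the full crictl schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		panic(err)
    	}
    	if len(imgs.Images) == 0 {
    		fmt.Println("assuming images are not preloaded")
    	}
    }
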
	I0813 20:24:39.524509    4908 ssh_runner.go:149] Run: which lz4
	I0813 20:24:39.528376    4908 command_runner.go:124] > /bin/lz4
	I0813 20:24:39.528629    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 20:24:39.528711    4908 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:24:39.532551    4908 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:24:39.532975    4908 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:24:39.533004    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 20:24:42.603774    4908 crio.go:362] Took 3.075096 seconds to copy over tarball
	I0813 20:24:42.603862    4908 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:24:47.556783    4908 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.952891602s)
	I0813 20:24:47.556814    4908 crio.go:369] Took 4.953008 seconds to extract the tarball
	I0813 20:24:47.556824    4908 ssh_runner.go:100] rm: /preloaded.tar.lz4
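
The sequence just logged — stat probe, scp of the preload tarball, `tar -I lz4` extraction, then `rm` — is straightforward to mirror locally. A rough Go sketch, assuming a local path and `tar` plus `lz4` on PATH (minikube drives the same commands over SSH, as root on the VM):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // assumed local path

	// Existence probe, mirroring the `stat` check in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "tarball missing; minikube would scp it over first:", err)
		os.Exit(1)
	}

	// Extract with lz4 decompression: `tar -I lz4 -C /var -xf /preloaded.tar.lz4`
	// (run without sudo here; the logged command runs as root on the VM).
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}

	// Remove the tarball once extracted, matching the final rm step.
	if err := os.Remove(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "cleanup failed:", err)
	}
}
```
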
	I0813 20:24:47.596055    4908 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:24:47.609155    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:24:47.620074    4908 docker.go:153] disabling docker service ...
	I0813 20:24:47.620124    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:24:47.631270    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:24:47.640076    4908 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 20:24:47.640166    4908 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:24:47.809318    4908 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 20:24:47.809395    4908 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:24:47.944580    4908 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 20:24:47.944607    4908 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 20:24:47.944664    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:24:47.955484    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:24:47.968128    4908 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:24:47.968150    4908 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:24:47.968180    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:24:47.976181    4908 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:24:47.976208    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
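
Both sed invocations are line-oriented rewrites of /etc/crio/crio.conf: one pins `pause_image`, the other points `cni_default_network` at the kindnet CNI config. A small Go sketch of the equivalent substitution (the sample input is made up; the regexes mirror the sed patterns above):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up excerpt standing in for /etc/crio/crio.conf.
	conf := "pause_image = \"k8s.gcr.io/pause:3.2\"\n# cni_default_network = \"\"\n"

	// Mirrors: sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|'
	conf = regexp.MustCompile(`(?m)^pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "k8s.gcr.io/pause:3.4.1"`)

	// Mirrors: sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|'
	conf = regexp.MustCompile(`(?m)^.*cni_default_network = .*$`).
		ReplaceAllString(conf, `cni_default_network = "kindnet"`)

	fmt.Print(conf)
}
```
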
	I0813 20:24:47.983629    4908 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:24:47.989652    4908 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:24:47.989883    4908 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:24:47.989919    4908 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:24:48.005172    4908 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
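
The three steps above cover CRI-O's networking prerequisites: the sysctl probe fails because br_netfilter is not yet loaded, modprobe loads it, and ip_forward is switched on. A small Go sketch that verifies the same state by reading /proc directly (Linux only; paths taken from the logged commands):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// The first file only exists once the br_netfilter module is loaded;
	// the second should read "1" after the echo above.
	for _, p := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables",
		"/proc/sys/net/ipv4/ip_forward",
	} {
		b, err := os.ReadFile(p)
		if err != nil {
			fmt.Printf("%s: %v (module not loaded?)\n", p, err)
			continue
		}
		fmt.Printf("%s = %s\n", p, strings.TrimSpace(string(b)))
	}
}
```
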
	I0813 20:24:48.011478    4908 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:24:48.136091    4908 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:24:48.244318    4908 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:24:48.244400    4908 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:24:48.249298    4908 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 20:24:48.249316    4908 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 20:24:48.249323    4908 command_runner.go:124] > Device: 14h/20d	Inode: 28443       Links: 1
	I0813 20:24:48.249330    4908 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:24:48.249335    4908 command_runner.go:124] > Access: 2021-08-13 20:24:39.469005242 +0000
	I0813 20:24:48.249341    4908 command_runner.go:124] > Modify: 2021-08-13 20:24:35.170685970 +0000
	I0813 20:24:48.249346    4908 command_runner.go:124] > Change: 2021-08-13 20:24:35.170685970 +0000
	I0813 20:24:48.249351    4908 command_runner.go:124] >  Birth: -
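
The 60s socket wait above is a plain poll-until-exists loop against /var/run/crio/crio.sock. A minimal sketch of that pattern (the 500ms interval is an assumption, not minikube's actual start.go code):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio.sock is ready")
}
```
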
	I0813 20:24:48.249623    4908 start.go:413] Will wait 60s for crictl version
	I0813 20:24:48.249661    4908 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:24:48.285188    4908 command_runner.go:124] > Version:  0.1.0
	I0813 20:24:48.285424    4908 command_runner.go:124] > RuntimeName:  cri-o
	I0813 20:24:48.285458    4908 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 20:24:48.285563    4908 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 20:24:48.287393    4908 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:24:48.287482    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:24:48.502168    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:24:48.502197    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:24:48.502207    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:24:48.502213    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:24:48.502223    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:24:48.502230    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:24:48.502236    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:24:48.502243    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:24:48.503436    4908 command_runner.go:124] ! time="2021-08-13T20:24:48Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:24:48.503524    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:24:48.787084    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:24:48.787106    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:24:48.787115    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:24:48.787120    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:24:48.787127    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:24:48.787132    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:24:48.787136    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:24:48.787141    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:24:48.788464    4908 command_runner.go:124] ! time="2021-08-13T20:24:48Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:24:50.150742    4908 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:24:50.151102    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:24:50.156784    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:50.157038    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:24:50.157071    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:24:50.157267    4908 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:24:50.162427    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
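
The bash one-liner above is an upsert on /etc/hosts: `grep -v` drops any stale `host.minikube.internal` line, the fresh mapping is appended, and the result is sudo-copied back into place. A rough Go equivalent of the rewrite step (operating on an in-memory copy rather than over SSH):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" (what the grep -v does)
// and appends a fresh "<ip>\t<name>" entry.
func upsertHost(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	// The real flow writes this to /tmp/h.$$ and sudo-cps it onto /etc/hosts.
	fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
}
```
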
	I0813 20:24:50.174514    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:24:50.174569    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:24:50.247001    4908 command_runner.go:124] > {
	I0813 20:24:50.247031    4908 command_runner.go:124] >   "images": [
	I0813 20:24:50.247038    4908 command_runner.go:124] >     {
	I0813 20:24:50.247050    4908 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 20:24:50.247057    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247067    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 20:24:50.247073    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247079    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247102    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 20:24:50.247115    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 20:24:50.247123    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247138    4908 command_runner.go:124] >       "size": "119984626",
	I0813 20:24:50.247148    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247154    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247162    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247167    4908 command_runner.go:124] >     },
	I0813 20:24:50.247173    4908 command_runner.go:124] >     {
	I0813 20:24:50.247184    4908 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 20:24:50.247191    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247200    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 20:24:50.247205    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247213    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247225    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 20:24:50.247240    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 20:24:50.247249    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247256    4908 command_runner.go:124] >       "size": "228528983",
	I0813 20:24:50.247263    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247270    4908 command_runner.go:124] >       "username": "nonroot",
	I0813 20:24:50.247292    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247300    4908 command_runner.go:124] >     },
	I0813 20:24:50.247305    4908 command_runner.go:124] >     {
	I0813 20:24:50.247312    4908 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 20:24:50.247317    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247323    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 20:24:50.247327    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247331    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247341    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 20:24:50.247349    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 20:24:50.247358    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247363    4908 command_runner.go:124] >       "size": "36950651",
	I0813 20:24:50.247367    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247373    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247376    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247380    4908 command_runner.go:124] >     },
	I0813 20:24:50.247383    4908 command_runner.go:124] >     {
	I0813 20:24:50.247390    4908 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 20:24:50.247395    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247400    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 20:24:50.247403    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247408    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247420    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 20:24:50.247431    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 20:24:50.247434    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247439    4908 command_runner.go:124] >       "size": "31470524",
	I0813 20:24:50.247445    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247450    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247454    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247457    4908 command_runner.go:124] >     },
	I0813 20:24:50.247460    4908 command_runner.go:124] >     {
	I0813 20:24:50.247467    4908 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 20:24:50.247472    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247477    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 20:24:50.247481    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247485    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247492    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 20:24:50.247501    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 20:24:50.247504    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247510    4908 command_runner.go:124] >       "size": "42585056",
	I0813 20:24:50.247514    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247518    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247523    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247526    4908 command_runner.go:124] >     },
	I0813 20:24:50.247530    4908 command_runner.go:124] >     {
	I0813 20:24:50.247536    4908 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 20:24:50.247541    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247546    4908 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 20:24:50.247549    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247553    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247562    4908 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 20:24:50.247570    4908 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 20:24:50.247574    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247578    4908 command_runner.go:124] >       "size": "254662613",
	I0813 20:24:50.247582    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247586    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247590    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247593    4908 command_runner.go:124] >     },
	I0813 20:24:50.247597    4908 command_runner.go:124] >     {
	I0813 20:24:50.247603    4908 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 20:24:50.247608    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247618    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 20:24:50.247623    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247628    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247637    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 20:24:50.247645    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 20:24:50.247649    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247653    4908 command_runner.go:124] >       "size": "126878961",
	I0813 20:24:50.247657    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.247661    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.247665    4908 command_runner.go:124] >       },
	I0813 20:24:50.247669    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247672    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247676    4908 command_runner.go:124] >     },
	I0813 20:24:50.247679    4908 command_runner.go:124] >     {
	I0813 20:24:50.247688    4908 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 20:24:50.247695    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247703    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 20:24:50.247711    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247718    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247731    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 20:24:50.247746    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 20:24:50.247751    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247763    4908 command_runner.go:124] >       "size": "121087578",
	I0813 20:24:50.247768    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.247775    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.247780    4908 command_runner.go:124] >       },
	I0813 20:24:50.247831    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247840    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247844    4908 command_runner.go:124] >     },
	I0813 20:24:50.247847    4908 command_runner.go:124] >     {
	I0813 20:24:50.247854    4908 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 20:24:50.247858    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247863    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 20:24:50.247866    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247871    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247880    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 20:24:50.247887    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 20:24:50.247892    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247896    4908 command_runner.go:124] >       "size": "105129702",
	I0813 20:24:50.247904    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.247912    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247917    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247922    4908 command_runner.go:124] >     },
	I0813 20:24:50.247925    4908 command_runner.go:124] >     {
	I0813 20:24:50.247931    4908 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 20:24:50.247936    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.247941    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 20:24:50.247944    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247948    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.247956    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 20:24:50.247964    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 20:24:50.247968    4908 command_runner.go:124] >       ],
	I0813 20:24:50.247972    4908 command_runner.go:124] >       "size": "51893338",
	I0813 20:24:50.247976    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.247979    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.247983    4908 command_runner.go:124] >       },
	I0813 20:24:50.247987    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.247991    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.247994    4908 command_runner.go:124] >     },
	I0813 20:24:50.247997    4908 command_runner.go:124] >     {
	I0813 20:24:50.248004    4908 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 20:24:50.248008    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.248013    4908 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 20:24:50.248016    4908 command_runner.go:124] >       ],
	I0813 20:24:50.248020    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.248027    4908 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 20:24:50.248035    4908 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 20:24:50.248039    4908 command_runner.go:124] >       ],
	I0813 20:24:50.248043    4908 command_runner.go:124] >       "size": "689817",
	I0813 20:24:50.248047    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.248051    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.248055    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.248058    4908 command_runner.go:124] >     }
	I0813 20:24:50.248061    4908 command_runner.go:124] >   ]
	I0813 20:24:50.248064    4908 command_runner.go:124] > }
	I0813 20:24:50.248208    4908 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:24:50.248223    4908 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:24:50.248290    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:24:50.281549    4908 command_runner.go:124] > {
	I0813 20:24:50.281590    4908 command_runner.go:124] >   "images": [
	I0813 20:24:50.281597    4908 command_runner.go:124] >     {
	I0813 20:24:50.281610    4908 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 20:24:50.281618    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.281629    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 20:24:50.281637    4908 command_runner.go:124] >       ],
	I0813 20:24:50.281645    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.281663    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 20:24:50.281680    4908 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 20:24:50.281687    4908 command_runner.go:124] >       ],
	I0813 20:24:50.281695    4908 command_runner.go:124] >       "size": "119984626",
	I0813 20:24:50.281704    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.281712    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.281729    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.281738    4908 command_runner.go:124] >     },
	I0813 20:24:50.281744    4908 command_runner.go:124] >     {
	I0813 20:24:50.281755    4908 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 20:24:50.281766    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.281775    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 20:24:50.281785    4908 command_runner.go:124] >       ],
	I0813 20:24:50.281796    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.281813    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 20:24:50.281830    4908 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 20:24:50.281838    4908 command_runner.go:124] >       ],
	I0813 20:24:50.281846    4908 command_runner.go:124] >       "size": "228528983",
	I0813 20:24:50.281855    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.281863    4908 command_runner.go:124] >       "username": "nonroot",
	I0813 20:24:50.281905    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.281933    4908 command_runner.go:124] >     },
	I0813 20:24:50.281940    4908 command_runner.go:124] >     {
	I0813 20:24:50.281968    4908 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 20:24:50.281980    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.281990    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 20:24:50.282001    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282009    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.282027    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 20:24:50.282046    4908 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 20:24:50.282055    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282066    4908 command_runner.go:124] >       "size": "36950651",
	I0813 20:24:50.282082    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.282092    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.282099    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.282109    4908 command_runner.go:124] >     },
	I0813 20:24:50.282116    4908 command_runner.go:124] >     {
	I0813 20:24:50.282131    4908 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 20:24:50.282140    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.282148    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 20:24:50.282156    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282163    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.282182    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 20:24:50.282199    4908 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 20:24:50.282207    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282215    4908 command_runner.go:124] >       "size": "31470524",
	I0813 20:24:50.282228    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.282238    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.282246    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.282254    4908 command_runner.go:124] >     },
	I0813 20:24:50.282260    4908 command_runner.go:124] >     {
	I0813 20:24:50.282273    4908 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 20:24:50.282285    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.282297    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 20:24:50.282304    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282313    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.282326    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 20:24:50.282344    4908 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 20:24:50.282353    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282361    4908 command_runner.go:124] >       "size": "42585056",
	I0813 20:24:50.282372    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.282380    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.282392    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.282398    4908 command_runner.go:124] >     },
	I0813 20:24:50.282406    4908 command_runner.go:124] >     {
	I0813 20:24:50.282417    4908 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 20:24:50.282429    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.282438    4908 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 20:24:50.282450    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282458    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.282474    4908 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 20:24:50.282495    4908 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 20:24:50.282505    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282516    4908 command_runner.go:124] >       "size": "254662613",
	I0813 20:24:50.282527    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.282534    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.282543    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.282549    4908 command_runner.go:124] >     },
	I0813 20:24:50.282557    4908 command_runner.go:124] >     {
	I0813 20:24:50.282568    4908 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 20:24:50.282580    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.282588    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 20:24:50.282598    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282606    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.282623    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 20:24:50.282638    4908 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 20:24:50.282644    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282654    4908 command_runner.go:124] >       "size": "126878961",
	I0813 20:24:50.282661    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.282672    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.282679    4908 command_runner.go:124] >       },
	I0813 20:24:50.282689    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.282701    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.282707    4908 command_runner.go:124] >     },
	I0813 20:24:50.282717    4908 command_runner.go:124] >     {
	I0813 20:24:50.282729    4908 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 20:24:50.282739    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.282749    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 20:24:50.282757    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282765    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.282782    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 20:24:50.282798    4908 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 20:24:50.282807    4908 command_runner.go:124] >       ],
	I0813 20:24:50.282880    4908 command_runner.go:124] >       "size": "121087578",
	I0813 20:24:50.282896    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.282904    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.282910    4908 command_runner.go:124] >       },
	I0813 20:24:50.282956    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.282968    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.282975    4908 command_runner.go:124] >     },
	I0813 20:24:50.282989    4908 command_runner.go:124] >     {
	I0813 20:24:50.283005    4908 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 20:24:50.283014    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.283022    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 20:24:50.283031    4908 command_runner.go:124] >       ],
	I0813 20:24:50.283038    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.283054    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 20:24:50.283071    4908 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 20:24:50.283082    4908 command_runner.go:124] >       ],
	I0813 20:24:50.283091    4908 command_runner.go:124] >       "size": "105129702",
	I0813 20:24:50.283103    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.283110    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.283119    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.283125    4908 command_runner.go:124] >     },
	I0813 20:24:50.283133    4908 command_runner.go:124] >     {
	I0813 20:24:50.283144    4908 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 20:24:50.283156    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.283165    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 20:24:50.283175    4908 command_runner.go:124] >       ],
	I0813 20:24:50.283183    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.283207    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 20:24:50.283219    4908 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 20:24:50.283228    4908 command_runner.go:124] >       ],
	I0813 20:24:50.283236    4908 command_runner.go:124] >       "size": "51893338",
	I0813 20:24:50.283242    4908 command_runner.go:124] >       "uid": {
	I0813 20:24:50.283250    4908 command_runner.go:124] >         "value": "0"
	I0813 20:24:50.283258    4908 command_runner.go:124] >       },
	I0813 20:24:50.283274    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.283281    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.283296    4908 command_runner.go:124] >     },
	I0813 20:24:50.283303    4908 command_runner.go:124] >     {
	I0813 20:24:50.283314    4908 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 20:24:50.283325    4908 command_runner.go:124] >       "repoTags": [
	I0813 20:24:50.283333    4908 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 20:24:50.283343    4908 command_runner.go:124] >       ],
	I0813 20:24:50.283351    4908 command_runner.go:124] >       "repoDigests": [
	I0813 20:24:50.283364    4908 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 20:24:50.283380    4908 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 20:24:50.283390    4908 command_runner.go:124] >       ],
	I0813 20:24:50.283403    4908 command_runner.go:124] >       "size": "689817",
	I0813 20:24:50.283415    4908 command_runner.go:124] >       "uid": null,
	I0813 20:24:50.283428    4908 command_runner.go:124] >       "username": "",
	I0813 20:24:50.283438    4908 command_runner.go:124] >       "spec": null
	I0813 20:24:50.283444    4908 command_runner.go:124] >     }
	I0813 20:24:50.283454    4908 command_runner.go:124] >   ]
	I0813 20:24:50.283460    4908 command_runner.go:124] > }
	I0813 20:24:50.283650    4908 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:24:50.283668    4908 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:24:50.283768    4908 ssh_runner.go:149] Run: crio config
	I0813 20:24:50.543400    4908 command_runner.go:124] ! time="2021-08-13T20:24:50Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:24:50.545073    4908 command_runner.go:124] ! time="2021-08-13T20:24:50Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 20:24:50.545101    4908 command_runner.go:124] ! time="2021-08-13T20:24:50Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 20:24:50.547870    4908 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 20:24:50.550327    4908 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 20:24:50.550345    4908 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 20:24:50.550352    4908 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 20:24:50.550362    4908 command_runner.go:124] > #
	I0813 20:24:50.550386    4908 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 20:24:50.550400    4908 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 20:24:50.550410    4908 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 20:24:50.550420    4908 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 20:24:50.550424    4908 command_runner.go:124] > # reload'.
	I0813 20:24:50.550431    4908 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 20:24:50.550440    4908 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 20:24:50.550447    4908 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 20:24:50.550454    4908 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 20:24:50.550457    4908 command_runner.go:124] > [crio]
	I0813 20:24:50.550464    4908 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 20:24:50.550475    4908 command_runner.go:124] > # containers images, in this directory.
	I0813 20:24:50.550484    4908 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 20:24:50.550498    4908 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 20:24:50.550509    4908 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 20:24:50.550534    4908 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 20:24:50.550543    4908 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 20:24:50.550549    4908 command_runner.go:124] > #storage_driver = "overlay"
	I0813 20:24:50.550558    4908 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0813 20:24:50.550569    4908 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 20:24:50.550577    4908 command_runner.go:124] > #storage_option = [
	I0813 20:24:50.550582    4908 command_runner.go:124] > #]
	I0813 20:24:50.550594    4908 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 20:24:50.550607    4908 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 20:24:50.550618    4908 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 20:24:50.550625    4908 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 20:24:50.550634    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 20:24:50.550643    4908 command_runner.go:124] > # always happen on a node reboot
	I0813 20:24:50.550649    4908 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 20:24:50.550655    4908 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 20:24:50.550668    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 20:24:50.550692    4908 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 20:24:50.550724    4908 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 20:24:50.550737    4908 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 20:24:50.550741    4908 command_runner.go:124] > [crio.api]
	I0813 20:24:50.550747    4908 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 20:24:50.550752    4908 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 20:24:50.550758    4908 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 20:24:50.550768    4908 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 20:24:50.550780    4908 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 20:24:50.550788    4908 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 20:24:50.550798    4908 command_runner.go:124] > stream_port = "0"
	I0813 20:24:50.550807    4908 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 20:24:50.550816    4908 command_runner.go:124] > stream_enable_tls = false
	I0813 20:24:50.550826    4908 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 20:24:50.550842    4908 command_runner.go:124] > stream_idle_timeout = ""
	I0813 20:24:50.550869    4908 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 20:24:50.550886    4908 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 20:24:50.550892    4908 command_runner.go:124] > # minutes.
	I0813 20:24:50.550898    4908 command_runner.go:124] > stream_tls_cert = ""
	I0813 20:24:50.550908    4908 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 20:24:50.550927    4908 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 20:24:50.550938    4908 command_runner.go:124] > stream_tls_key = ""
	I0813 20:24:50.550950    4908 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 20:24:50.550963    4908 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 20:24:50.550972    4908 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 20:24:50.550977    4908 command_runner.go:124] > stream_tls_ca = ""
	I0813 20:24:50.550993    4908 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:24:50.551003    4908 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 20:24:50.551015    4908 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:24:50.551025    4908 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 20:24:50.551039    4908 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 20:24:50.551051    4908 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 20:24:50.551060    4908 command_runner.go:124] > [crio.runtime]
	I0813 20:24:50.551072    4908 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 20:24:50.551081    4908 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 20:24:50.551090    4908 command_runner.go:124] > # "nofile=1024:2048"
	I0813 20:24:50.551105    4908 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 20:24:50.551115    4908 command_runner.go:124] > #default_ulimits = [
	I0813 20:24:50.551124    4908 command_runner.go:124] > #]
	I0813 20:24:50.551146    4908 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 20:24:50.551157    4908 command_runner.go:124] > no_pivot = false
	I0813 20:24:50.551164    4908 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 20:24:50.551182    4908 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 20:24:50.551193    4908 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 20:24:50.551206    4908 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 20:24:50.551219    4908 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 20:24:50.551230    4908 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 20:24:50.551240    4908 command_runner.go:124] > # Cgroup setting for conmon
	I0813 20:24:50.551250    4908 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 20:24:50.551259    4908 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 20:24:50.551264    4908 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 20:24:50.551273    4908 command_runner.go:124] > conmon_env = [
	I0813 20:24:50.551283    4908 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 20:24:50.551289    4908 command_runner.go:124] > ]
	I0813 20:24:50.551298    4908 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 20:24:50.551308    4908 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 20:24:50.551320    4908 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 20:24:50.551329    4908 command_runner.go:124] > default_env = [
	I0813 20:24:50.551333    4908 command_runner.go:124] > ]
	I0813 20:24:50.551344    4908 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 20:24:50.551353    4908 command_runner.go:124] > selinux = false
	I0813 20:24:50.551363    4908 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 20:24:50.551373    4908 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 20:24:50.551384    4908 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 20:24:50.551393    4908 command_runner.go:124] > seccomp_profile = ""
	I0813 20:24:50.551402    4908 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 20:24:50.551416    4908 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 20:24:50.551431    4908 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 20:24:50.551441    4908 command_runner.go:124] > # which might increase security.
	I0813 20:24:50.551449    4908 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 20:24:50.551461    4908 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 20:24:50.551471    4908 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 20:24:50.551482    4908 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 20:24:50.551496    4908 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 20:24:50.551506    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.551520    4908 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 20:24:50.551534    4908 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 20:24:50.551544    4908 command_runner.go:124] > # irqbalance daemon.
	I0813 20:24:50.551562    4908 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 20:24:50.551573    4908 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 20:24:50.551582    4908 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 20:24:50.551594    4908 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 20:24:50.551601    4908 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 20:24:50.551619    4908 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 20:24:50.551635    4908 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 20:24:50.551644    4908 command_runner.go:124] > # will be added.
	I0813 20:24:50.551648    4908 command_runner.go:124] > default_capabilities = [
	I0813 20:24:50.551653    4908 command_runner.go:124] > 	"CHOWN",
	I0813 20:24:50.551658    4908 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 20:24:50.551666    4908 command_runner.go:124] > 	"FSETID",
	I0813 20:24:50.551672    4908 command_runner.go:124] > 	"FOWNER",
	I0813 20:24:50.551679    4908 command_runner.go:124] > 	"SETGID",
	I0813 20:24:50.551685    4908 command_runner.go:124] > 	"SETUID",
	I0813 20:24:50.551692    4908 command_runner.go:124] > 	"SETPCAP",
	I0813 20:24:50.551698    4908 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 20:24:50.551707    4908 command_runner.go:124] > 	"KILL",
	I0813 20:24:50.551712    4908 command_runner.go:124] > ]
	I0813 20:24:50.551723    4908 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 20:24:50.551735    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:24:50.551744    4908 command_runner.go:124] > default_sysctls = [
	I0813 20:24:50.551749    4908 command_runner.go:124] > ]
	I0813 20:24:50.551758    4908 command_runner.go:124] > # List of additional devices, specified as
	I0813 20:24:50.551773    4908 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 20:24:50.551784    4908 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 20:24:50.551797    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:24:50.551806    4908 command_runner.go:124] > additional_devices = [
	I0813 20:24:50.551811    4908 command_runner.go:124] > ]
	I0813 20:24:50.551820    4908 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 20:24:50.551827    4908 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 20:24:50.551834    4908 command_runner.go:124] > hooks_dir = [
	I0813 20:24:50.551842    4908 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 20:24:50.551850    4908 command_runner.go:124] > ]
	I0813 20:24:50.551860    4908 command_runner.go:124] > # Path to the file specifying the default mounts for each container. The
	I0813 20:24:50.551874    4908 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 20:24:50.551885    4908 command_runner.go:124] > # its default mounts from the following two files:
	I0813 20:24:50.551893    4908 command_runner.go:124] > #
	I0813 20:24:50.551903    4908 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 20:24:50.551922    4908 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 20:24:50.551932    4908 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 20:24:50.551940    4908 command_runner.go:124] > #
	I0813 20:24:50.551950    4908 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 20:24:50.551963    4908 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 20:24:50.551977    4908 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 20:24:50.551988    4908 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 20:24:50.551995    4908 command_runner.go:124] > #
	I0813 20:24:50.552001    4908 command_runner.go:124] > #default_mounts_file = ""
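	The /SRC:/DST format those mounts files use can be sketched with an assumed secrets path (illustrative only, not part of this run):
	echo '/usr/share/rhel/secrets:/run/secrets' | sudo tee -a /etc/containers/mounts.conf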
	I0813 20:24:50.552010    4908 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 20:24:50.552019    4908 command_runner.go:124] > pids_limit = 1024
	I0813 20:24:50.552034    4908 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 20:24:50.552047    4908 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 20:24:50.552060    4908 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 20:24:50.552072    4908 command_runner.go:124] > # limit is never exceeded.
	I0813 20:24:50.552082    4908 command_runner.go:124] > log_size_max = -1
	I0813 20:24:50.552144    4908 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 20:24:50.552155    4908 command_runner.go:124] > log_to_journald = false
	I0813 20:24:50.552165    4908 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 20:24:50.552174    4908 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 20:24:50.552186    4908 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 20:24:50.552196    4908 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 20:24:50.552207    4908 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 20:24:50.552214    4908 command_runner.go:124] > bind_mount_prefix = ""
	I0813 20:24:50.552225    4908 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 20:24:50.552232    4908 command_runner.go:124] > read_only = false
	I0813 20:24:50.552240    4908 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 20:24:50.552254    4908 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 20:24:50.552263    4908 command_runner.go:124] > # live configuration reload.
	I0813 20:24:50.552269    4908 command_runner.go:124] > log_level = "info"
	I0813 20:24:50.552280    4908 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 20:24:50.552292    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.552304    4908 command_runner.go:124] > log_filter = ""
	I0813 20:24:50.552317    4908 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 20:24:50.552330    4908 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 20:24:50.552337    4908 command_runner.go:124] > # separated by comma.
	I0813 20:24:50.552340    4908 command_runner.go:124] > uid_mappings = ""
	I0813 20:24:50.552350    4908 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 20:24:50.552364    4908 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 20:24:50.552380    4908 command_runner.go:124] > # separated by comma.
	I0813 20:24:50.552390    4908 command_runner.go:124] > gid_mappings = ""
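	A hypothetical mapping in the containerUID:HostUID:Size form documented above (all values assumed): container ID 0 maps to host ID 100000, covering 65536 consecutive IDs:
	sudo tee /etc/crio/crio.conf.d/20-userns.conf <<-'EOF'
		[crio.runtime]
		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"
	EOF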
	I0813 20:24:50.552400    4908 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 20:24:50.552412    4908 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 20:24:50.552424    4908 command_runner.go:124] > # value is 30s; lower values are ignored by CRI-O.
	I0813 20:24:50.552430    4908 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 20:24:50.552437    4908 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 20:24:50.552447    4908 command_runner.go:124] > # and manage their lifecycle.
	I0813 20:24:50.552459    4908 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 20:24:50.552469    4908 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 20:24:50.552480    4908 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 20:24:50.552494    4908 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 20:24:50.552505    4908 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 20:24:50.552519    4908 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 20:24:50.552526    4908 command_runner.go:124] > drop_infra_ctr = false
	I0813 20:24:50.552535    4908 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 20:24:50.552547    4908 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 20:24:50.552562    4908 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 20:24:50.552572    4908 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 20:24:50.552582    4908 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 20:24:50.552590    4908 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 20:24:50.552597    4908 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 20:24:50.552607    4908 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 20:24:50.552611    4908 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 20:24:50.552619    4908 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 20:24:50.552629    4908 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 20:24:50.552639    4908 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 20:24:50.552646    4908 command_runner.go:124] > default_runtime = "runc"
	I0813 20:24:50.552658    4908 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 20:24:50.552669    4908 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 20:24:50.552679    4908 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 20:24:50.552688    4908 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 20:24:50.552692    4908 command_runner.go:124] > #
	I0813 20:24:50.552697    4908 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 20:24:50.552702    4908 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 20:24:50.552708    4908 command_runner.go:124] > #  runtime_type = "oci"
	I0813 20:24:50.552715    4908 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 20:24:50.552723    4908 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 20:24:50.552729    4908 command_runner.go:124] > #  allowed_annotations = []
	I0813 20:24:50.552740    4908 command_runner.go:124] > # Where:
	I0813 20:24:50.552748    4908 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 20:24:50.552758    4908 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 20:24:50.552772    4908 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 20:24:50.552783    4908 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 20:24:50.552789    4908 command_runner.go:124] > #   in $PATH.
	I0813 20:24:50.552798    4908 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 20:24:50.552810    4908 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 20:24:50.552821    4908 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 20:24:50.552832    4908 command_runner.go:124] > #   state.
	I0813 20:24:50.552845    4908 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 20:24:50.552857    4908 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 20:24:50.552871    4908 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 20:24:50.552883    4908 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 20:24:50.552893    4908 command_runner.go:124] > #   The currently recognized values are:
	I0813 20:24:50.552905    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 20:24:50.552918    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 20:24:50.552930    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 20:24:50.552940    4908 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 20:24:50.552948    4908 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 20:24:50.552957    4908 command_runner.go:124] > runtime_type = "oci"
	I0813 20:24:50.552963    4908 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 20:24:50.552973    4908 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 20:24:50.552979    4908 command_runner.go:124] > # running containers
	I0813 20:24:50.552987    4908 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 20:24:50.552999    4908 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 20:24:50.553012    4908 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 20:24:50.553024    4908 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 20:24:50.553035    4908 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 20:24:50.553045    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 20:24:50.553051    4908 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 20:24:50.553057    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 20:24:50.553064    4908 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 20:24:50.553074    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
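	As a sketch of the runtimes-table format above, registering crun as an extra handler (binary path assumed; mirrors the commented #[crio.runtime.runtimes.crun] stub):
	sudo tee /etc/crio/crio.conf.d/30-crun.conf <<-'EOF'
		[crio.runtime.runtimes.crun]
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
	EOF
	sudo systemctl restart crio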
	I0813 20:24:50.553086    4908 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 20:24:50.553095    4908 command_runner.go:124] > #
	I0813 20:24:50.553111    4908 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 20:24:50.553124    4908 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 20:24:50.553136    4908 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 20:24:50.553151    4908 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 20:24:50.553164    4908 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 20:24:50.553173    4908 command_runner.go:124] > [crio.image]
	I0813 20:24:50.553183    4908 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 20:24:50.553193    4908 command_runner.go:124] > default_transport = "docker://"
	I0813 20:24:50.553203    4908 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 20:24:50.553216    4908 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:24:50.553227    4908 command_runner.go:124] > global_auth_file = ""
	I0813 20:24:50.553235    4908 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 20:24:50.553244    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.553254    4908 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 20:24:50.553265    4908 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 20:24:50.553278    4908 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:24:50.553290    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:24:50.553297    4908 command_runner.go:124] > pause_image_auth_file = ""
	I0813 20:24:50.553309    4908 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 20:24:50.553321    4908 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 20:24:50.553330    4908 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 20:24:50.553341    4908 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 20:24:50.553352    4908 command_runner.go:124] > pause_command = "/pause"
	I0813 20:24:50.553362    4908 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 20:24:50.553376    4908 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 20:24:50.553389    4908 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 20:24:50.553410    4908 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 20:24:50.553421    4908 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 20:24:50.553426    4908 command_runner.go:124] > signature_policy = ""
	I0813 20:24:50.553434    4908 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 20:24:50.553447    4908 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 20:24:50.553457    4908 command_runner.go:124] > # changing them here.
	I0813 20:24:50.553465    4908 command_runner.go:124] > #insecure_registries = "[]"
	I0813 20:24:50.553477    4908 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 20:24:50.553488    4908 command_runner.go:124] > # ignore; the last will ignore volumes entirely.
	I0813 20:24:50.553498    4908 command_runner.go:124] > image_volumes = "mkdir"
	I0813 20:24:50.553506    4908 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 20:24:50.553519    4908 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 20:24:50.553533    4908 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 20:24:50.553551    4908 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 20:24:50.553562    4908 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 20:24:50.553569    4908 command_runner.go:124] > #registries = [
	I0813 20:24:50.553704    4908 command_runner.go:124] > # 	"docker.io",
	I0813 20:24:50.554196    4908 command_runner.go:124] > #]
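	If extra unqualified-search registries were wanted, they belong in the system-wide file the comments point at; a minimal sketch assuming the registries.conf v2 format:
	sudo tee /etc/containers/registries.conf <<-'EOF'
		unqualified-search-registries = ["docker.io", "quay.io"]
	EOF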
	I0813 20:24:50.554221    4908 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 20:24:50.554230    4908 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 20:24:50.554250    4908 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 20:24:50.554256    4908 command_runner.go:124] > # CNI plugins.
	I0813 20:24:50.554270    4908 command_runner.go:124] > [crio.network]
	I0813 20:24:50.554284    4908 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 20:24:50.554301    4908 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0813 20:24:50.554311    4908 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 20:24:50.554328    4908 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 20:24:50.554341    4908 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 20:24:50.554354    4908 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 20:24:50.554385    4908 command_runner.go:124] > plugin_dirs = [
	I0813 20:24:50.554395    4908 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 20:24:50.554400    4908 command_runner.go:124] > ]
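	With network_dir and plugin_dirs set as above, what CRI-O will actually pick up can be checked on the node:
	ls /etc/cni/net.d/   # first config found here wins when cni_default_network is unset
	ls /opt/cni/bin/     # plugin binaries (e.g. portmap, bridge)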
	I0813 20:24:50.554420    4908 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 20:24:50.554426    4908 command_runner.go:124] > [crio.metrics]
	I0813 20:24:50.554434    4908 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 20:24:50.554446    4908 command_runner.go:124] > enable_metrics = true
	I0813 20:24:50.554459    4908 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 20:24:50.554468    4908 command_runner.go:124] > metrics_port = 9090
	I0813 20:24:50.554516    4908 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 20:24:50.554529    4908 command_runner.go:124] > metrics_socket = ""
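	With enable_metrics = true and metrics_port = 9090 as above, the Prometheus endpoint should be scrapeable from the node, e.g.:
	curl -s http://127.0.0.1:9090/metrics | head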
	I0813 20:24:50.554605    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:24:50.554620    4908 cni.go:154] 1 nodes found, recommending kindnet
	I0813 20:24:50.554631    4908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:24:50.554646    4908 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813202419-30853 NodeName:multinode-20210813202419-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.64 CgroupDriver:systemd ClientCAFile:/v
ar/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:24:50.554827    4908 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813202419-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
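	One way to sanity-check a rendered config like this without touching the node's state is kubeadm's dry-run mode, using the same binary path that appears later in this log:
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run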
	
	I0813 20:24:50.555250    4908 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813202419-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.64 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:24:50.555317    4908 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:24:50.563047    4908 command_runner.go:124] > kubeadm
	I0813 20:24:50.563059    4908 command_runner.go:124] > kubectl
	I0813 20:24:50.563063    4908 command_runner.go:124] > kubelet
	I0813 20:24:50.563439    4908 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:24:50.563508    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:24:50.570579    4908 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0813 20:24:50.582471    4908 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:24:50.594516    4908 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
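	After the unit, drop-in, and kubeadm config are copied over, the standard systemd workflow applies (shown for reference):
	systemctl cat kubelet        # prints kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload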
	I0813 20:24:50.606196    4908 ssh_runner.go:149] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0813 20:24:50.610130    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
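	The one-liner above is an idempotent /etc/hosts update: grep -v drops any stale control-plane.minikube.internal line, the fresh entry is appended, and the result is copied back in one step. The same pattern, generalized (function name and arguments are illustrative):
	update_host() {
	  { grep -v $'\t'"$1"'$' /etc/hosts; printf '%s\t%s\n' "$2" "$1"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	update_host control-plane.minikube.internal 192.168.39.64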
	I0813 20:24:50.620699    4908 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853 for IP: 192.168.39.64
	I0813 20:24:50.620757    4908 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:24:50.620779    4908 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:24:50.620826    4908 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key
	I0813 20:24:50.620843    4908 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt with IP's: []
	I0813 20:24:51.344548    4908 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt ...
	I0813 20:24:51.344584    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt: {Name:mka1ee370ee925eb7e1501675df2e7ea7e3c224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.344790    4908 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key ...
	I0813 20:24:51.344808    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key: {Name:mk15171e52061b2035f15d5434f676c84c199eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.344899    4908 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390
	I0813 20:24:51.344911    4908 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390 with IP's: [192.168.39.64 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:24:51.395270    4908 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390 ...
	I0813 20:24:51.395299    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390: {Name:mk754558e7785498ea66501b23a82045536b3325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.395463    4908 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390 ...
	I0813 20:24:51.395475    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390: {Name:mke02b58094314a18ba8e3f83b2c71c941e182ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.395554    4908 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt.b878b390 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt
	I0813 20:24:51.395662    4908 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key.b878b390 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key
	I0813 20:24:51.395724    4908 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key
	I0813 20:24:51.395732    4908 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt with IP's: []
	I0813 20:24:51.652494    4908 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt ...
	I0813 20:24:51.652528    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt: {Name:mk684c6e1d6b08d32f001fa3d1e79a30161eb9d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.652711    4908 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key ...
	I0813 20:24:51.652726    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key: {Name:mkcbcdb7473a58f8d14621a8dede511c86f24c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:24:51.652806    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 20:24:51.652822    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 20:24:51.652831    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 20:24:51.652840    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 20:24:51.652852    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:24:51.652862    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:24:51.652874    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:24:51.652884    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:24:51.652932    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:24:51.652967    4908 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:24:51.652982    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:24:51.653007    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:24:51.653030    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:24:51.653054    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:24:51.653096    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:24:51.653123    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.653136    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem -> /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.653145    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.653944    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:24:51.674456    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:24:51.692244    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:24:51.709106    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:24:51.725960    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:24:51.742359    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:24:51.758198    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:24:51.774580    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:24:51.790864    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:24:51.806953    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:24:51.824402    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:24:51.841914    4908 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:24:51.853430    4908 ssh_runner.go:149] Run: openssl version
	I0813 20:24:51.858510    4908 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 20:24:51.859107    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:24:51.866399    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.870523    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.870935    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.870979    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:24:51.876366    4908 command_runner.go:124] > b5213941
	I0813 20:24:51.876428    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:24:51.885234    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:24:51.892403    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.896812    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.896833    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.896864    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:24:51.901946    4908 command_runner.go:124] > 51391683
	I0813 20:24:51.902276    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:24:51.909497    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:24:51.916917    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.921077    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.921328    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.921381    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:24:51.926815    4908 command_runner.go:124] > 3ec20f2e
	I0813 20:24:51.926937    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
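	Each symlink name here is the certificate's OpenSSL subject hash plus a ".0" suffix, which is how OpenSSL locates CAs in /etc/ssl/certs; the hashes match the openssl x509 -hash output logged above:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem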
	I0813 20:24:51.934097    4908 kubeadm.go:390] StartCluster: {Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3
ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:24:51.934172    4908 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:24:51.934204    4908 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:24:51.967438    4908 cri.go:76] found id: ""
	I0813 20:24:51.967505    4908 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:24:51.974346    4908 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0813 20:24:51.974372    4908 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0813 20:24:51.974383    4908 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0813 20:24:51.974493    4908 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:24:51.980818    4908 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:24:51.987175    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0813 20:24:51.987198    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0813 20:24:51.987206    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0813 20:24:51.987214    4908 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:24:51.987242    4908 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:24:51.987277    4908 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:24:52.150812    4908 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0813 20:24:52.150903    4908 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 20:24:52.477726    4908 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 20:24:52.477866    4908 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 20:24:52.478009    4908 command_runner.go:124] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0813 20:24:52.705047    4908 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 20:24:52.802691    4908 out.go:204]   - Generating certificates and keys ...
	I0813 20:24:52.802809    4908 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0813 20:24:52.802913    4908 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0813 20:24:52.894346    4908 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 20:24:53.087905    4908 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0813 20:24:53.191891    4908 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0813 20:24:53.331526    4908 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0813 20:24:53.677101    4908 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0813 20:24:53.678093    4908 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210813202419-30853] and IPs [192.168.39.64 127.0.0.1 ::1]
	I0813 20:24:53.787935    4908 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0813 20:24:53.788292    4908 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210813202419-30853] and IPs [192.168.39.64 127.0.0.1 ::1]
	I0813 20:24:54.106364    4908 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 20:24:54.223555    4908 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 20:24:54.306956    4908 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0813 20:24:54.307322    4908 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 20:24:54.491373    4908 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 20:24:54.683168    4908 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 20:24:54.799101    4908 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 20:24:54.906630    4908 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 20:24:54.932231    4908 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 20:24:54.933513    4908 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 20:24:54.933570    4908 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 20:24:55.105952    4908 out.go:204]   - Booting up control plane ...
	I0813 20:24:55.104142    4908 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 20:24:55.106083    4908 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 20:24:55.118658    4908 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 20:24:55.119663    4908 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 20:24:55.120478    4908 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 20:24:55.128338    4908 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 20:25:11.128024    4908 command_runner.go:124] > [apiclient] All control plane components are healthy after 16.005267 seconds
	I0813 20:25:11.128159    4908 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 20:25:11.167559    4908 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 20:25:11.706952    4908 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0813 20:25:11.707291    4908 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210813202419-30853 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 20:25:12.221982    4908 out.go:204]   - Configuring RBAC rules ...
	I0813 20:25:12.220526    4908 command_runner.go:124] > [bootstrap-token] Using token: 6rribx.g18moxouefc7yp35
	I0813 20:25:12.222112    4908 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 20:25:12.230255    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 20:25:12.248069    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 20:25:12.255826    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 20:25:12.259793    4908 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 20:25:12.265826    4908 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 20:25:12.283294    4908 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 20:25:12.719703    4908 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0813 20:25:12.786682    4908 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0813 20:25:12.791765    4908 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0813 20:25:12.791863    4908 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0813 20:25:12.791903    4908 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0813 20:25:12.792048    4908 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 20:25:12.792141    4908 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 20:25:12.792238    4908 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0813 20:25:12.792285    4908 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 20:25:12.792345    4908 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0813 20:25:12.792452    4908 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 20:25:12.792543    4908 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 20:25:12.792683    4908 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0813 20:25:12.792749    4908 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0813 20:25:12.792821    4908 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 6rribx.g18moxouefc7yp35 \
	I0813 20:25:12.792922    4908 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b \
	I0813 20:25:12.792957    4908 command_runner.go:124] > 	--control-plane 
	I0813 20:25:12.793052    4908 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0813 20:25:12.793126    4908 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 6rribx.g18moxouefc7yp35 \
	I0813 20:25:12.793257    4908 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b 
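	The bootstrap token in this join command carries ttl: 24h0m0s (see the InitConfiguration above); once it expires, a fresh join command can be printed on the control plane:
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm token create --print-join-command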
	I0813 20:25:12.797355    4908 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 20:25:12.798324    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:25:12.798352    4908 cni.go:154] 1 nodes found, recommending kindnet
	I0813 20:25:12.800148    4908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:25:12.800219    4908 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:25:12.809588    4908 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 20:25:12.809625    4908 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 20:25:12.809633    4908 command_runner.go:124] > Device: 10h/16d	Inode: 22875       Links: 1
	I0813 20:25:12.809644    4908 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:25:12.809657    4908 command_runner.go:124] > Access: 2021-08-13 20:24:33.325091489 +0000
	I0813 20:25:12.809670    4908 command_runner.go:124] > Modify: 2021-08-10 20:02:08.000000000 +0000
	I0813 20:25:12.809678    4908 command_runner.go:124] > Change: 2021-08-13 20:24:29.381091489 +0000
	I0813 20:25:12.809682    4908 command_runner.go:124] >  Birth: -
	I0813 20:25:12.809726    4908 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:25:12.809739    4908 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:25:12.826189    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:25:13.371212    4908 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0813 20:25:13.371243    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0813 20:25:13.371270    4908 command_runner.go:124] > serviceaccount/kindnet created
	I0813 20:25:13.371277    4908 command_runner.go:124] > daemonset.apps/kindnet created
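	For context, the CNI manifest applied above deploys kindnet as a DaemonSet; an illustrative verification sketch (label assumed from the upstream kindnet manifest, not taken from this log):
	    # Wait for the CNI DaemonSet to roll out, then list its pods across nodes.
	    kubectl -n kube-system rollout status daemonset kindnet --timeout=2m
	    kubectl -n kube-system get pods -l app=kindnet -o wide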
	I0813 20:25:13.372077    4908 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:25:13.372152    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:13.372165    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=multinode-20210813202419-30853 minikube.k8s.io/updated_at=2021_08_13T20_25_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:13.398902    4908 command_runner.go:124] > -16
	I0813 20:25:13.543665    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0813 20:25:13.543745    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:13.543798    4908 command_runner.go:124] > node/multinode-20210813202419-30853 labeled
	I0813 20:25:13.543837    4908 ops.go:34] apiserver oom_adj: -16
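	The oom_adj probe above can be reproduced by hand on the node; a minimal sketch (assumes kube-apiserver is the only process matching pgrep):
	    # A negative value (-16 here) lowers the apiserver's legacy OOM-kill priority,
	    # protecting it under memory pressure.
	    cat /proc/$(pgrep kube-apiserver)/oom_adj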
	I0813 20:25:13.644100    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:14.144960    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:14.251465    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:14.644917    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:14.758484    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:15.144974    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:15.251636    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:15.644645    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:15.761083    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:16.144985    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:16.262175    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:16.644790    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:16.751945    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:17.144514    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:17.243537    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:17.644808    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:17.750454    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:18.145181    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:18.457392    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:18.644918    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:18.769135    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:19.145018    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:19.264331    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:19.644391    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:19.756377    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:20.145045    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:20.256966    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:20.644536    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:20.754430    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:21.145141    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:21.253463    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:21.645101    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:21.756729    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:22.145182    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:22.243392    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:22.645133    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:22.748965    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:23.145058    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:23.248005    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:23.645177    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:23.833689    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:24.144894    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:24.334176    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:24.644535    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:24.755823    4908 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:25.144379    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:25.416222    4908 command_runner.go:124] > NAME      SECRETS   AGE
	I0813 20:25:25.416248    4908 command_runner.go:124] > default   1         0s
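	The retry loop above (polling until kube-controller-manager creates the "default" ServiceAccount) can be expressed as a standalone shell sketch; binary and kubeconfig paths are assumed from this run:
	    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5  # the controller-manager creates "default" shortly after startup
	    done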
	I0813 20:25:25.416271    4908 kubeadm.go:985] duration metric: took 12.04420308s to wait for elevateKubeSystemPrivileges.
	I0813 20:25:25.416286    4908 kubeadm.go:392] StartCluster complete in 33.482194053s
	I0813 20:25:25.416307    4908 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:25.416448    4908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:25.417295    4908 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:25.417861    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:25.418155    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:25.418644    4908 cert_rotation.go:137] Starting client certificate rotation controller
	I0813 20:25:25.419820    4908 round_trippers.go:432] GET https://192.168.39.64:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:25.419836    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.419841    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.419844    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.429297    4908 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 20:25:25.429313    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.429319    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.429324    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.429329    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.429333    4908 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:25.429338    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.429342    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.429366    4908 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"440","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.430157    4908 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"440","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.430214    4908 round_trippers.go:432] PUT https://192.168.39.64:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:25.430233    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.430240    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.430247    4908 round_trippers.go:442]     Content-Type: application/json
	I0813 20:25:25.430254    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.434776    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:25.434791    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.434796    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.434799    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.434802    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.434805    4908 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:25.434808    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.434811    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.434825    4908 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"442","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.935652    4908 round_trippers.go:432] GET https://192.168.39.64:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:25.935679    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.935690    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.935694    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.939785    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:25.939802    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.939806    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.939809    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.939812    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.939815    4908 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:25.939818    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.939821    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.939839    4908 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"73e8c4a8-95d5-4c3a-b449-cc0cea21354a","resourceVersion":"453","creationTimestamp":"2021-08-13T20:25:12Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:25.939926    4908 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210813202419-30853" rescaled to 1
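	The PUT to the Scale subresource above is equivalent to the following one-liner (illustrative; not executed in this run):
	    kubectl -n kube-system scale deployment coredns --replicas=1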
	I0813 20:25:25.939976    4908 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:25:25.939985    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:25:25.951751    4908 out.go:177] * Verifying Kubernetes components...
	I0813 20:25:25.940078    4908 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:25:25.940215    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:25.951839    4908 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210813202419-30853"
	I0813 20:25:25.951869    4908 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210813202419-30853"
	W0813 20:25:25.951876    4908 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:25:25.951830    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:25:25.951911    4908 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:25:25.951848    4908 addons.go:59] Setting default-storageclass=true in profile "multinode-20210813202419-30853"
	I0813 20:25:25.951970    4908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210813202419-30853"
	I0813 20:25:25.952434    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.952445    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.952485    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.952523    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.963548    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0813 20:25:25.963992    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.964452    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.964475    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.964826    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.964990    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:25:25.968250    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41769
	I0813 20:25:25.968666    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.969156    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.969185    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.969370    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:25.969539    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.969663    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:25.969995    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.970033    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.971099    4908 round_trippers.go:432] GET https://192.168.39.64:8443/apis/storage.k8s.io/v1/storageclasses
	I0813 20:25:25.971114    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:25.971123    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:25.971129    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:25.975871    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:25.975885    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:25.975889    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:25.975893    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:25.975902    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:25.975909    4908 round_trippers.go:463]     Content-Length: 109
	I0813 20:25:25.975914    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:25 GMT
	I0813 20:25:25.975919    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:25.975989    4908 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"453"},"items":[]}
	I0813 20:25:25.976521    4908 addons.go:135] Setting addon default-storageclass=true in "multinode-20210813202419-30853"
	W0813 20:25:25.976538    4908 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:25:25.976567    4908 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:25:25.976849    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.976891    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.980405    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0813 20:25:25.980839    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.981331    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.981353    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.981816    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.982000    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:25:25.984919    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:25:25.986829    4908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:25:25.986985    4908 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:25:25.987002    4908 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:25:25.987020    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:25:25.987783    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0813 20:25:25.988196    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:25.988659    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:25.988682    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:25.988993    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:25.989553    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:25.989603    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:25.992706    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:25.993085    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:25:25.993116    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:25.993249    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:25:25.993401    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:25:25.993524    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:25:25.993634    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:25:26.000649    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34563
	I0813 20:25:26.001002    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:26.001394    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:26.001415    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:26.001723    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:26.001896    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:25:26.004525    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:25:26.004735    4908 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:25:26.004755    4908 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:25:26.004773    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:25:26.010190    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:26.010600    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:25:26.010635    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:25:26.010752    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:25:26.010913    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:25:26.011060    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:25:26.011187    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:25:26.233408    4908 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:25:26.250861    4908 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:25:26.266369    4908 command_runner.go:124] > apiVersion: v1
	I0813 20:25:26.266391    4908 command_runner.go:124] > data:
	I0813 20:25:26.266396    4908 command_runner.go:124] >   Corefile: |
	I0813 20:25:26.266399    4908 command_runner.go:124] >     .:53 {
	I0813 20:25:26.266403    4908 command_runner.go:124] >         errors
	I0813 20:25:26.266408    4908 command_runner.go:124] >         health {
	I0813 20:25:26.266413    4908 command_runner.go:124] >            lameduck 5s
	I0813 20:25:26.266417    4908 command_runner.go:124] >         }
	I0813 20:25:26.266421    4908 command_runner.go:124] >         ready
	I0813 20:25:26.266428    4908 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0813 20:25:26.266438    4908 command_runner.go:124] >            pods insecure
	I0813 20:25:26.266445    4908 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0813 20:25:26.266450    4908 command_runner.go:124] >            ttl 30
	I0813 20:25:26.266455    4908 command_runner.go:124] >         }
	I0813 20:25:26.266459    4908 command_runner.go:124] >         prometheus :9153
	I0813 20:25:26.266465    4908 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0813 20:25:26.266470    4908 command_runner.go:124] >            max_concurrent 1000
	I0813 20:25:26.266475    4908 command_runner.go:124] >         }
	I0813 20:25:26.266479    4908 command_runner.go:124] >         cache 30
	I0813 20:25:26.266482    4908 command_runner.go:124] >         loop
	I0813 20:25:26.266486    4908 command_runner.go:124] >         reload
	I0813 20:25:26.266490    4908 command_runner.go:124] >         loadbalance
	I0813 20:25:26.266493    4908 command_runner.go:124] >     }
	I0813 20:25:26.266498    4908 command_runner.go:124] > kind: ConfigMap
	I0813 20:25:26.266501    4908 command_runner.go:124] > metadata:
	I0813 20:25:26.266513    4908 command_runner.go:124] >   creationTimestamp: "2021-08-13T20:25:12Z"
	I0813 20:25:26.266521    4908 command_runner.go:124] >   name: coredns
	I0813 20:25:26.266528    4908 command_runner.go:124] >   namespace: kube-system
	I0813 20:25:26.266535    4908 command_runner.go:124] >   resourceVersion: "281"
	I0813 20:25:26.266544    4908 command_runner.go:124] >   uid: 926124bc-ba0a-4974-ac80-8723f8307429
	I0813 20:25:26.272432    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:26.272652    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:26.272894    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
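	The sed pipeline above splices a hosts block in front of the forward stanza, so the rewritten Corefile fragment looks roughly like this (reconstructed from the command, not captured output):
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }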
	I0813 20:25:26.273876    4908 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813202419-30853" to be "Ready" ...
	I0813 20:25:26.273943    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:26.273951    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.273956    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.273960    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.276760    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:26.276773    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.276778    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.276783    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.276787    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.276790    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.276795    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.277481    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:26.279715    4908 node_ready.go:49] node "multinode-20210813202419-30853" has status "Ready":"True"
	I0813 20:25:26.279731    4908 node_ready.go:38] duration metric: took 5.835411ms waiting for node "multinode-20210813202419-30853" to be "Ready" ...
	I0813 20:25:26.279738    4908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
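	A comparable manual wait for one of the system-critical pod groups (illustrative; minikube performs the equivalent via the REST client traced below):
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s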
	I0813 20:25:26.279795    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:26.279804    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.279809    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.279813    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.283212    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:26.283229    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.283236    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.283240    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.283246    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.283251    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.283256    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.284034    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"434","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 53650 chars]
	I0813 20:25:26.289384    4908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:26.289452    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:26.289465    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.289471    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.289475    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.292557    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:26.292580    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.292587    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.292593    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.292599    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.292604    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.292609    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.292831    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"434","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4268 chars]
	I0813 20:25:26.298289    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:26.298318    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.298326    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.298333    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.300670    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:26.300685    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.300689    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.300692    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.300695    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.300698    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.300701    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.300903    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:26.801597    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:26.801627    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.801635    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.801641    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.804860    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:26.804882    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.804889    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.804895    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.804900    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.804905    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.804911    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.805028    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:26.805437    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:26.805462    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:26.805469    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:26.805483    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:26.807768    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:26.807789    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:26.807795    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:26.807800    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:26.807804    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:26.807808    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:26 GMT
	I0813 20:25:26.807813    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:26.807988    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:27.301610    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:27.301640    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.301647    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.301652    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.305200    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:27.305225    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.305231    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.305236    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.305241    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.305245    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.305249    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.305773    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:27.306227    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:27.306248    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.306255    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.306260    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.308537    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:27.308551    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.308556    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.308559    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.308562    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.308567    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.308571    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.308925    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:27.801623    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:27.801659    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.801668    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.801674    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.804707    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:27.804732    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.804737    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.804740    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.804743    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.804746    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.804750    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.804907    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:27.805342    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:27.805364    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:27.805372    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:27.805377    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:27.808023    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:27.808034    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:27.808038    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:27.808041    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:27.808044    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:27.808047    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:27.808049    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:27 GMT
	I0813 20:25:27.808384    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:28.142997    4908 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0813 20:25:28.148693    4908 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0813 20:25:28.148722    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0813 20:25:28.148793    4908 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.915348399s)
	I0813 20:25:28.148843    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.148864    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.149152    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.149210    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.149227    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.149237    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.149509    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | Closing plugin on server side
	I0813 20:25:28.149551    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.149562    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.149575    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.149587    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.149857    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.149878    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.154782    4908 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 20:25:28.170480    4908 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 20:25:28.182623    4908 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0813 20:25:28.202105    4908 command_runner.go:124] > pod/storage-provisioner created
	I0813 20:25:28.204764    4908 command_runner.go:124] > configmap/coredns replaced
	I0813 20:25:28.204795    4908 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.931877788s)
	I0813 20:25:28.204809    4908 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
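	The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts stanza immediately before the existing forward plugin line so that host.minikube.internal resolves to the host gateway (192.168.39.1 on this run). Reconstructed from the command above (assuming the stock Corefile kubeadm ships around it), the patched section reads:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf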
	I0813 20:25:28.204923    4908 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.954041212s)
	I0813 20:25:28.204954    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.204965    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.205213    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.205235    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.205246    4908 main.go:130] libmachine: Making call to close driver server
	I0813 20:25:28.205256    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .Close
	I0813 20:25:28.205478    4908 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:25:28.205494    4908 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:25:28.207371    4908 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0813 20:25:28.207397    4908 addons.go:344] enableAddons completed in 2.267327105s
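	The default-storageclass addon enabled above applies /etc/kubernetes/addons/storageclass.yaml. Judging from the "storageclass.storage.k8s.io/standard created" output and the k8s.io-minikube-hostpath endpoints object, that manifest is presumably along these lines (a sketch, not the verbatim file; the provisioner name is inferred from the endpoints object):

	    apiVersion: storage.k8s.io/v1
	    kind: StorageClass
	    metadata:
	      name: standard
	      annotations:
	        storageclass.kubernetes.io/is-default-class: "true"
	    provisioner: k8s.io/minikube-hostpath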
	I0813 20:25:28.301769    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:28.301791    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.301797    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.301801    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.311305    4908 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 20:25:28.311330    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.311338    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.311344    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.311349    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.311355    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.311360    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.311529    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:28.311965    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:28.311995    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.312003    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.312010    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.320767    4908 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0813 20:25:28.320787    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.320794    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.320801    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.320808    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.320813    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.320822    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.321083    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:28.321438    4908 pod_ready.go:102] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:28.801730    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:28.801760    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.801768    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.801775    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.804548    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:28.804572    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.804579    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.804583    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.804588    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.804592    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.804596    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.804781    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:28.805311    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:28.805329    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:28.805336    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:28.805342    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:28.808044    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:28.808062    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:28.808069    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:28.808073    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:28.808078    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:28.808082    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:28.808087    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:28 GMT
	I0813 20:25:28.808746    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:29.302322    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:29.302354    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.302363    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.302370    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.306794    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:29.306814    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.306819    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.306823    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.306828    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.306832    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.306836    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.307190    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"455","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 20:25:29.307520    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:29.307532    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.307537    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.307541    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.309520    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:29.309539    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.309544    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.309549    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.309553    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.309558    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.309563    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.309916    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:29.801509    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:29.801533    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.801538    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.801542    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.807276    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:25:29.807294    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.807299    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.807302    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.807306    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.807309    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.807311    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.807478    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:29.808065    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:29.808089    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:29.808096    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:29.808102    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:29.810517    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:29.810533    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:29.810538    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:29.810543    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:29.810547    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:29.810552    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:29.810556    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:29 GMT
	I0813 20:25:29.810658    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:30.302310    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:30.302339    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.302346    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.302352    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.305420    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:30.305435    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.305440    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.305445    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.305450    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.305454    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.305458    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.306099    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:30.306488    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:30.306506    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.306511    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.306515    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.309527    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:30.309540    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.309545    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.309548    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.309551    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.309554    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.309557    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.309934    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:30.801554    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:30.801577    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.801583    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.801587    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.804052    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:30.804069    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.804076    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.804081    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.804086    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.804091    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.804095    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.804584    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:30.804997    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:30.805016    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:30.805022    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:30.805028    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:30.807354    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:30.807365    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:30.807369    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:30.807372    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:30.807375    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:30.807378    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:30.807381    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:30 GMT
	I0813 20:25:30.807571    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:30.807885    4908 pod_ready.go:102] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:31.301912    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:31.301936    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.301941    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.301945    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.306270    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:31.306292    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.306301    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.306306    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.306310    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.306314    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.306319    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.306656    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:31.307097    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:31.307115    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.307122    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.307128    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.309737    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:31.309753    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.309761    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.309766    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.309770    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.309775    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.309779    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.309855    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:31.801440    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:31.801470    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.801477    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.801484    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.804630    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:31.804650    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.804656    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.804661    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.804665    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.804670    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.804674    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.804755    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:31.805087    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:31.805103    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:31.805109    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:31.805115    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:31.808371    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:31.808391    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:31.808397    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:31.808402    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:31.808405    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:31.808409    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:31.808412    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:31 GMT
	I0813 20:25:31.808491    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:32.302122    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:32.302155    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.302162    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.302168    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.305946    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.305959    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.305965    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.305970    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.305974    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.305979    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.305984    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.306325    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"484","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5962 chars]
	I0813 20:25:32.306651    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:32.306666    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.306675    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.306688    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.308656    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:32.308673    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.308678    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.308681    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.308684    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.308687    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.308690    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.308926    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:32.801598    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:25:32.801628    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.801636    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.801642    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.805937    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:32.805961    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.805967    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.805975    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.805991    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.805999    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.806003    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.806475    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5733 chars]
	I0813 20:25:32.806977    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:32.807001    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.807009    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.807016    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.810106    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.810125    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.810131    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.810136    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.810140    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.810145    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.810150    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.810332    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:32.810685    4908 pod_ready.go:92] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:32.810711    4908 pod_ready.go:81] duration metric: took 6.521303635s waiting for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:32.810726    4908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace to be "Ready" ...
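	Each ~500 ms block of GETs above is one iteration of the pod_ready wait: fetch the Pod, inspect its Ready condition, stop on True or on timeout. Schematically, in client-go terms (a hypothetical sketch that mirrors the logged behavior, not minikube's actual source; waitPodReady is an illustrative name):

	    package main

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls the API server roughly twice a second until the pod
	    // reports Ready=True, the timeout elapses, or a request fails.
	    func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
	            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return false, err // bail on API errors; a real loop might tolerate transient ones
	            }
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady {
	                    return cond.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil // no Ready condition reported yet; keep polling
	        })
	    }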
	I0813 20:25:32.810796    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:32.810810    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.810816    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.810821    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.814812    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.814830    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.814836    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.814841    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.814845    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.814865    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.814870    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.816054    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:32.816408    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:32.816423    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:32.816429    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:32.816433    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:32.819832    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:32.819850    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:32.819859    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:32.819863    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:32.819868    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:32.819872    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:32.819877    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:32 GMT
	I0813 20:25:32.820682    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:33.321758    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:33.321784    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.321790    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.321794    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.324800    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:33.324823    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.324830    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.324835    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.324839    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.324844    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.324848    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.325035    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:33.325506    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:33.325530    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.325536    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.325540    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.327692    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:33.327711    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.327718    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.327723    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.327728    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.327733    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.327751    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.328140    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:33.821834    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:33.821860    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.821866    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.821871    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.828588    4908 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 20:25:33.828613    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.828620    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.828625    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.828628    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.828631    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.828636    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.829958    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:33.830380    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:33.830397    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:33.830403    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:33.830407    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:33.832893    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:33.832915    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:33.832922    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:33.832926    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:33 GMT
	I0813 20:25:33.832931    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:33.832936    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:33.832940    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:33.833136    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:34.321464    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:34.321493    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.321499    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.321503    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.324285    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:34.324309    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.324315    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.324320    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.324324    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.324330    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.324334    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.324568    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:34.324918    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:34.324936    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.324942    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.324948    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.328440    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:34.328457    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.328463    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.328468    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.328472    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.328477    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.328481    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.328952    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:34.821592    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:34.821619    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.821625    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.821629    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.824680    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:34.824697    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.824702    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.824705    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.824709    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.824712    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.824715    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.824805    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:34.825202    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:34.825220    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:34.825227    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:34.825234    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:34.828083    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:34.828104    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:34.828109    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:34.828112    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:34.828115    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:34.828118    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:34.828121    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:34 GMT
	I0813 20:25:34.828281    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:34.828526    4908 pod_ready.go:102] pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:35.321987    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:35.322012    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.322017    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.322022    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.328029    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:25:35.328049    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.328055    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.328060    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.328064    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.328068    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.328072    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.328950    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:35.329294    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:35.329311    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.329317    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.329323    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.333158    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:35.333176    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.333182    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.333187    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.333191    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.333194    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.333197    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.334216    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:35.822051    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:35.822078    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.822087    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.822091    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.826605    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:35.826628    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.826634    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.826639    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.826643    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.826646    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.826649    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.827191    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:35.827509    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:35.827521    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:35.827526    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:35.827530    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:35.830085    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:35.830100    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:35.830106    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:35 GMT
	I0813 20:25:35.830110    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:35.830115    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:35.830120    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:35.830123    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:35.830664    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:36.321865    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:36.321891    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.321896    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.321901    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.325423    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:36.325442    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.325448    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.325453    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.325457    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.325461    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.325465    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.325905    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:36.326264    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:36.326315    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.326339    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.326346    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.329645    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:36.329661    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.329665    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.329669    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.329672    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.329674    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.329678    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.329856    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:36.821488    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:36.821515    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.821523    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.821528    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.824589    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:36.824625    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.824633    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.824637    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.824642    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.824646    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.824654    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.825014    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:36.825359    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:36.825375    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:36.825380    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:36.825384    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:36.827882    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:36.827905    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:36.827912    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:36.827917    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:36.827921    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:36.827925    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:36.827930    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:36 GMT
	I0813 20:25:36.828088    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:37.321802    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:37.321827    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.321833    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.321837    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.326481    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:37.326498    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.326504    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.326507    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.326510    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.326513    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.326516    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.326717    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:37.327189    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:37.327210    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.327217    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.327223    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.329586    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:37.329603    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.329608    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.329613    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.329619    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.329624    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.329628    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.329852    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:37.330251    4908 pod_ready.go:102] pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:37.821454    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:37.821484    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.821492    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.821498    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.826053    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:37.826071    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.826076    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.826079    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.826082    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.826085    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.826088    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.826269    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:37.826627    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:37.826644    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:37.826649    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:37.826657    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:37.829440    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:37.829450    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:37.829454    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:37.829457    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:37.829460    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:37 GMT
	I0813 20:25:37.829463    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:37.829466    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:37.829760    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.321428    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:38.321455    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.321461    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.321465    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.324410    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.324425    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.324430    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.324433    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.324436    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.324439    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.324444    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.324775    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-nnsgn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"13726b49-920a-4c21-9f87-b969630657c6","resourceVersion":"450","creationTimestamp":"2021-08-13T20:25:25Z","deletionTimestamp":"2021-08-13T20:25:55Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5703 chars]
	I0813 20:25:38.325171    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.325193    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.325200    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.325207    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.327948    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.327959    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.327963    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.327966    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.327969    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.327972    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.327975    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.328511    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.821159    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-nnsgn
	I0813 20:25:38.821185    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.821190    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.821195    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.823634    4908 round_trippers.go:457] Response Status: 404 Not Found in 2 milliseconds
	I0813 20:25:38.823645    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.823649    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.823657    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.823662    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.823666    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.823673    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.823677    4908 round_trippers.go:463]     Content-Length: 216
	I0813 20:25:38.823911    4908 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-nnsgn\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-nnsgn","kind":"pods"},"code":404}
	I0813 20:25:38.824430    4908 pod_ready.go:97] error getting pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-nnsgn" not found
	I0813 20:25:38.824456    4908 pod_ready.go:81] duration metric: took 6.013715908s waiting for pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace to be "Ready" ...
	E0813 20:25:38.824466    4908 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-nnsgn" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-nnsgn" not found
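
The repeated GET pairs above are minikube's pod_ready wait loop: roughly every 500ms it fetches the pod (and its node) from the apiserver, checks the pod's Ready condition, and client-go's round_trippers debug logging records each round trip. The loop ends here because the coredns pod was deleted mid-wait: the 404 at 20:25:38 is treated as "skip this pod" rather than a hard failure. A minimal sketch of that polling pattern, assuming client-go and hypothetical names (package podwait, waitPodReady); it is not minikube's actual pod_ready implementation and omits the companion node GET:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the apiserver until the named pod reports the
// PodReady condition as True, the pod disappears, or the timeout
// elapses. The ~500ms interval and 6m0s timeout mirror the request
// cadence and the "waiting up to 6m0s" messages in the log above.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// Pod deleted mid-wait (the 404 / "skipping!" case above):
			// stop polling instead of waiting out the full timeout.
			return true, nil
		}
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
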
	I0813 20:25:38.824475    4908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.824563    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813202419-30853
	I0813 20:25:38.824576    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.824583    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.824589    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.827235    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.827265    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.827270    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.827273    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.827277    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.827279    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.827282    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.827427    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813202419-30853","namespace":"kube-system","uid":"0e8c51de-4800-4c2d-af81-4f4f197d3cd5","resourceVersion":"491","creationTimestamp":"2021-08-13T20:25:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.64:2379","kubernetes.io/config.hash":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.mirror":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.seen":"2021-08-13T20:25:00.776305134Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0813 20:25:38.827726    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.827739    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.827744    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.827748    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.830263    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.830275    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.830279    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.830282    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.830285    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.830288    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.830290    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.830967    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.831278    4908 pod_ready.go:92] pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.831296    4908 pod_ready.go:81] duration metric: took 6.782033ms waiting for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.831311    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.831365    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813202419-30853
	I0813 20:25:38.831376    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.831382    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.831388    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.833676    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.833692    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.833697    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.833702    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.833706    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.833710    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.833714    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.834111    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813202419-30853","namespace":"kube-system","uid":"53b6207c-cf99-4cb1-b237-0e69df65538b","resourceVersion":"478","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.64:8443","kubernetes.io/config.hash":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.mirror":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.seen":"2021-08-13T20:25:00.776307664Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0813 20:25:38.834365    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.834376    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.834380    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.834384    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.836187    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:38.836198    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.836202    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.836206    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.836209    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.836212    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.836215    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.836358    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.836630    4908 pod_ready.go:92] pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.836642    4908 pod_ready.go:81] duration metric: took 5.323998ms waiting for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.836650    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.836690    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813202419-30853
	I0813 20:25:38.836699    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.836704    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.836708    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.838515    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:38.838527    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.838531    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.838534    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.838537    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.838540    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.838543    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.838799    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813202419-30853","namespace":"kube-system","uid":"f1752bba-a132-4093-8ff3-ad48483d468b","resourceVersion":"475","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.mirror":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.seen":"2021-08-13T20:25:00.776309845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0813 20:25:38.839137    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.839153    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.839158    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.839163    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.841254    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.841266    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.841269    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.841273    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.841276    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.841279    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.841282    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.841452    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.841692    4908 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.841708    4908 pod_ready.go:81] duration metric: took 5.049968ms waiting for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.841719    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.841770    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rb42p
	I0813 20:25:38.841782    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.841788    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.841794    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.843333    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:38.843349    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.843356    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.843361    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.843365    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.843370    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.843374    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.843750    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rb42p","generateName":"kube-proxy-","namespace":"kube-system","uid":"5633ede2-5578-4565-97af-b83cf1b25f0d","resourceVersion":"459","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb4b18d1-5cff-490a-b573-900487c4d9e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb4b18d1-5cff-490a-b573-900487c4d9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0813 20:25:38.843990    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:38.844001    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.844006    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.844010    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.846335    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:38.846346    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.846352    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.846356    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.846361    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.846365    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.846368    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.846808    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:38.847106    4908 pod_ready.go:92] pod "kube-proxy-rb42p" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:38.847123    4908 pod_ready.go:81] duration metric: took 5.39711ms waiting for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.847133    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:38.847197    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202419-30853
	I0813 20:25:38.847209    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:38.847215    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:38.847221    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:38.851869    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:25:38.851877    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:38.851880    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:38 GMT
	I0813 20:25:38.851883    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:38.851886    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:38.851889    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:38.851895    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:38.852320    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813202419-30853","namespace":"kube-system","uid":"ed906c56-f110-4e49-aa1c-5e0e0b8cb88c","resourceVersion":"384","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.mirror":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.seen":"2021-08-13T20:25:00.776286387Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0813 20:25:39.021810    4908 request.go:600] Waited for 169.226887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:39.021868    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:25:39.021874    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.021879    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.021884    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.067997    4908 round_trippers.go:457] Response Status: 200 OK in 46 milliseconds
	I0813 20:25:39.068023    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.068030    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.068034    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.068039    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.068043    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.068047    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.068186    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:25:39.068459    4908 pod_ready.go:92] pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:25:39.068472    4908 pod_ready.go:81] duration metric: took 221.329677ms waiting for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:39.068483    4908 pod_ready.go:38] duration metric: took 12.788734644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
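	The pod_ready.go lines above poll each control-plane pod until its Ready condition is True. As an aside, a minimal client-go sketch of that wait loop is below; the function name `waitPodReady` and the 2s poll interval are illustrative assumptions, not minikube's actual values.

```go
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports condition Ready=True,
// the same check pod_ready.go logs above ("has status \"Ready\":\"True\"").
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```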
	I0813 20:25:39.068507    4908 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:25:39.068559    4908 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:25:39.080065    4908 command_runner.go:124] > 2597
	I0813 20:25:39.080421    4908 api_server.go:70] duration metric: took 13.140417912s to wait for apiserver process to appear ...
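	The process check above runs `pgrep -xnf kube-apiserver.*minikube.*` on the VM and treats a printed PID (2597 here) as success. A self-contained sketch of the same probe, run locally via os/exec rather than minikube's ssh_runner (and without the sudo the log shows):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits non-zero when no process matches the pattern.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // e.g. "2597" in this run
}
```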
	I0813 20:25:39.080435    4908 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:25:39.080446    4908 api_server.go:239] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0813 20:25:39.087991    4908 api_server.go:265] https://192.168.39.64:8443/healthz returned 200:
	ok
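	The healthz probe is just an HTTPS GET against the apiserver that expects a 200 with body "ok", as logged above. A minimal sketch follows; TLS verification is skipped only to keep the example self-contained, whereas the real client trusts the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}
	resp, err := client.Get("https://192.168.39.64:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```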
	I0813 20:25:39.088054    4908 round_trippers.go:432] GET https://192.168.39.64:8443/version?timeout=32s
	I0813 20:25:39.088064    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.088070    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.088084    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.089109    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:39.089123    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.089128    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.089133    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.089137    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.089142    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.089146    4908 round_trippers.go:463]     Content-Length: 263
	I0813 20:25:39.089149    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.089270    4908 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0813 20:25:39.089363    4908 api_server.go:139] control plane version: v1.21.3
	I0813 20:25:39.089380    4908 api_server.go:129] duration metric: took 8.93951ms to wait for apiserver health ...
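	The control-plane version is read straight out of the /version JSON shown above. A sketch of that parse, declaring only the fields actually used (the response is apimachinery's version.Info):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the /version response fields we care about.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	// Abbreviated copy of the response body logged above.
	raw := []byte(`{"major":"1","minor":"21","gitVersion":"v1.21.3"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.21.3
}
```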
	I0813 20:25:39.089389    4908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:25:39.222007    4908 request.go:600] Waited for 132.539789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.222073    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.222081    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.222089    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.222131    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.229677    4908 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 20:25:39.229698    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.229705    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.229710    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.229713    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.229716    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.229719    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.232359    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52849 chars]
	I0813 20:25:39.233519    4908 system_pods.go:59] 8 kube-system pods found
	I0813 20:25:39.233539    4908 system_pods.go:61] "coredns-558bd4d5db-58k2l" [0431b736-8284-40c7-9bc4-fcc968e4c41b] Running
	I0813 20:25:39.233544    4908 system_pods.go:61] "etcd-multinode-20210813202419-30853" [0e8c51de-4800-4c2d-af81-4f4f197d3cd5] Running
	I0813 20:25:39.233547    4908 system_pods.go:61] "kindnet-hc4k2" [8c73e66e-2ec6-4a1b-a7af-3edb2c517f18] Running
	I0813 20:25:39.233551    4908 system_pods.go:61] "kube-apiserver-multinode-20210813202419-30853" [53b6207c-cf99-4cb1-b237-0e69df65538b] Running
	I0813 20:25:39.233555    4908 system_pods.go:61] "kube-controller-manager-multinode-20210813202419-30853" [f1752bba-a132-4093-8ff3-ad48483d468b] Running
	I0813 20:25:39.233561    4908 system_pods.go:61] "kube-proxy-rb42p" [5633ede2-5578-4565-97af-b83cf1b25f0d] Running
	I0813 20:25:39.233564    4908 system_pods.go:61] "kube-scheduler-multinode-20210813202419-30853" [ed906c56-f110-4e49-aa1c-5e0e0b8cb88c] Running
	I0813 20:25:39.233568    4908 system_pods.go:61] "storage-provisioner" [7839155d-5552-45cb-ab31-a243fd82f32e] Running
	I0813 20:25:39.233573    4908 system_pods.go:74] duration metric: took 144.178753ms to wait for pod list to return data ...
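	The "8 kube-system pods found" summary comes from a single PodList GET. A sketch of the equivalent client-go call, printing each pod with its UID and phase in the same shape as the log:

```go
package podcheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listKubeSystem lists kube-system pods and counts how many are Running.
func listKubeSystem(cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	fmt.Printf("%d/%d running\n", running, len(pods.Items))
	return nil
}
```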
	I0813 20:25:39.233588    4908 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:25:39.422006    4908 request.go:600] Waited for 188.350998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/default/serviceaccounts
	I0813 20:25:39.422073    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/default/serviceaccounts
	I0813 20:25:39.422078    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.422083    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.422093    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.425157    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:39.425180    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.425187    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.425192    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.425196    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.425199    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.425202    4908 round_trippers.go:463]     Content-Length: 304
	I0813 20:25:39.425205    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.425226    4908 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6031096d-6790-4fd8-abfd-6fb4c0c8f61a","resourceVersion":"406","creationTimestamp":"2021-08-13T20:25:25Z"},"secrets":[{"name":"default-token-9blrs"}]}]}
	I0813 20:25:39.425736    4908 default_sa.go:45] found service account: "default"
	I0813 20:25:39.425753    4908 default_sa.go:55] duration metric: took 192.159209ms for default service account to be created ...
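	The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's own rate limiter queuing requests; with the library defaults (QPS 5, burst 10) a burst of back-to-back GETs like the ones above gets delayed on the client side. A sketch of where those knobs live, with raised values chosen only for illustration:

```go
package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset from a kubeconfig with a higher client-side
// rate limit, so bursts of requests are not queued the way the log shows.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // default is 5
	cfg.Burst = 100 // default is 10
	return kubernetes.NewForConfig(cfg)
}
```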
	I0813 20:25:39.425761    4908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:25:39.622231    4908 request.go:600] Waited for 196.387039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.622290    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:39.622297    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.622302    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.622306    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.625739    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:39.625764    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.625771    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.625775    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.625779    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.625784    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.625788    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.626375    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52849 chars]
	I0813 20:25:39.628032    4908 system_pods.go:86] 8 kube-system pods found
	I0813 20:25:39.628059    4908 system_pods.go:89] "coredns-558bd4d5db-58k2l" [0431b736-8284-40c7-9bc4-fcc968e4c41b] Running
	I0813 20:25:39.628067    4908 system_pods.go:89] "etcd-multinode-20210813202419-30853" [0e8c51de-4800-4c2d-af81-4f4f197d3cd5] Running
	I0813 20:25:39.628097    4908 system_pods.go:89] "kindnet-hc4k2" [8c73e66e-2ec6-4a1b-a7af-3edb2c517f18] Running
	I0813 20:25:39.628106    4908 system_pods.go:89] "kube-apiserver-multinode-20210813202419-30853" [53b6207c-cf99-4cb1-b237-0e69df65538b] Running
	I0813 20:25:39.628117    4908 system_pods.go:89] "kube-controller-manager-multinode-20210813202419-30853" [f1752bba-a132-4093-8ff3-ad48483d468b] Running
	I0813 20:25:39.628125    4908 system_pods.go:89] "kube-proxy-rb42p" [5633ede2-5578-4565-97af-b83cf1b25f0d] Running
	I0813 20:25:39.628130    4908 system_pods.go:89] "kube-scheduler-multinode-20210813202419-30853" [ed906c56-f110-4e49-aa1c-5e0e0b8cb88c] Running
	I0813 20:25:39.628138    4908 system_pods.go:89] "storage-provisioner" [7839155d-5552-45cb-ab31-a243fd82f32e] Running
	I0813 20:25:39.628151    4908 system_pods.go:126] duration metric: took 202.383679ms to wait for k8s-apps to be running ...
	I0813 20:25:39.628164    4908 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:25:39.628217    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:25:39.639184    4908 system_svc.go:56] duration metric: took 11.015292ms WaitForService to wait for kubelet.
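	The kubelet liveness check above relies on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. A local sketch of the same idea (the log's exact invocation, including sudo and the extra `service` argument, runs over SSH inside the VM):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means active; any non-zero code surfaces as a non-nil err.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```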
	I0813 20:25:39.639209    4908 kubeadm.go:547] duration metric: took 13.699205758s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:25:39.639228    4908 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:25:39.821632    4908 request.go:600] Waited for 182.336333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes
	I0813 20:25:39.821699    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes
	I0813 20:25:39.821709    4908 round_trippers.go:438] Request Headers:
	I0813 20:25:39.821717    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:39.821729    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:39.825121    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:39.825133    4908 round_trippers.go:460] Response Headers:
	I0813 20:25:39.825139    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:39.825144    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:39.825149    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:25:39.825154    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:25:39.825160    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:39 GMT
	I0813 20:25:39.825313    4908 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 6606 chars]
	I0813 20:25:39.826298    4908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:25:39.826327    4908 node_conditions.go:123] node cpu capacity is 2
	I0813 20:25:39.826345    4908 node_conditions.go:105] duration metric: took 187.112573ms to run NodePressure ...
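	The NodePressure verification reads node capacity and conditions from the NodeList fetched above. A client-go sketch of that check, reporting the same ephemeral-storage and CPU capacity figures and flagging any pressure condition that is True:

```go
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure lists nodes and reports capacity plus pressure conditions.
func verifyNodePressure(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
	return nil
}
```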
	I0813 20:25:39.826358    4908 start.go:231] waiting for startup goroutines ...
	I0813 20:25:39.828691    4908 out.go:177] 
	I0813 20:25:39.828879    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:39.828955    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:25:39.830971    4908 out.go:177] * Starting node multinode-20210813202419-30853-m02 in cluster multinode-20210813202419-30853
	I0813 20:25:39.830993    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:39.831011    4908 cache.go:56] Caching tarball of preloaded images
	I0813 20:25:39.831144    4908 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:25:39.831164    4908 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
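	The preload check above is a simple cache hit: if the preloaded-images tarball already exists on disk, the download is skipped. A sketch of that stat-and-skip logic; the path is shortened to a generic $HOME layout rather than the Jenkins workspace path in the log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found preload in cache, skipping download")
	} else {
		fmt.Println("preload missing, would download:", err)
	}
}
```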
	I0813 20:25:39.831245    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:25:39.831383    4908 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:25:39.831407    4908 start.go:313] acquiring machines lock for multinode-20210813202419-30853-m02: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:25:39.831479    4908 start.go:317] acquired machines lock for "multinode-20210813202419-30853-m02" in 57.116µs
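	The machines lock above is a cross-process, file-backed mutex with the Delay:500ms/Timeout:13m parameters shown; the 57µs acquisition means it was uncontended. The sketch below is only a process-local stand-in illustrating the acquire-and-measure shape, not minikube's actual locking package:

```go
package machinelock

import (
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{}
)

// acquire returns the lock for a machine name and how long acquiring took,
// matching the `acquired machines lock for "..." in 57.116µs` log shape.
func acquire(machine string) (*sync.Mutex, time.Duration) {
	mu.Lock()
	l, ok := locks[machine]
	if !ok {
		l = &sync.Mutex{}
		locks[machine] = l
	}
	mu.Unlock()

	start := time.Now()
	l.Lock()
	return l, time.Since(start)
}
```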
	I0813 20:25:39.831500    4908 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21
.3 ControlPlane:false Worker:true}
	I0813 20:25:39.831569    4908 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0813 20:25:39.833361    4908 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 20:25:39.833442    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:25:39.833475    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:25:39.843918    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35515
	I0813 20:25:39.844325    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:25:39.844771    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:25:39.844809    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:25:39.845137    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:25:39.845319    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:39.845475    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:39.845608    4908 start.go:160] libmachine.API.Create for "multinode-20210813202419-30853" (driver="kvm2")
	I0813 20:25:39.845639    4908 client.go:168] LocalClient.Create starting
	I0813 20:25:39.845673    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:25:39.845703    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:25:39.845724    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:25:39.845839    4908 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:25:39.845859    4908 main.go:130] libmachine: Decoding PEM data...
	I0813 20:25:39.845870    4908 main.go:130] libmachine: Parsing certificate...
	I0813 20:25:39.845910    4908 main.go:130] libmachine: Running pre-create checks...
	I0813 20:25:39.845919    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .PreCreateCheck
	I0813 20:25:39.846067    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetConfigRaw
	I0813 20:25:39.846445    4908 main.go:130] libmachine: Creating machine...
	I0813 20:25:39.846462    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .Create
	I0813 20:25:39.846581    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Creating KVM machine...
	I0813 20:25:39.849346    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found existing default KVM network
	I0813 20:25:39.849493    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found existing private KVM network mk-multinode-20210813202419-30853
	I0813 20:25:39.849607    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02 ...
	I0813 20:25:39.849636    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:25:39.849670    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:39.849568    5183 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:25:39.849755    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:25:40.025304    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.025182    5183 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa...
	I0813 20:25:40.264706    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.264555    5183 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/multinode-20210813202419-30853-m02.rawdisk...
	I0813 20:25:40.264750    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Writing magic tar header
	I0813 20:25:40.264800    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Writing SSH key tar header
	I0813 20:25:40.264821    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.264687    5183 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02 ...
	I0813 20:25:40.264842    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02
	I0813 20:25:40.264870    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:25:40.264895    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:25:40.264917    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02 (perms=drwx------)
	I0813 20:25:40.264945    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:25:40.264965    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:25:40.264986    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:25:40.265004    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:25:40.265024    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:25:40.265039    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:25:40.265052    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:25:40.265071    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:25:40.265085    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Checking permissions on dir: /home
	I0813 20:25:40.265100    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Skipping /home - not owner
	I0813 20:25:40.265144    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Creating domain...
	I0813 20:25:40.289147    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:39:2c:5f in network default
	I0813 20:25:40.289612    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Ensuring networks are active...
	I0813 20:25:40.289635    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:40.291637    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Ensuring network default is active
	I0813 20:25:40.291940    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Ensuring network mk-multinode-20210813202419-30853 is active
	I0813 20:25:40.292296    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Getting domain xml...
	I0813 20:25:40.294048    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Creating domain...
	I0813 20:25:40.681999    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Waiting to get IP...
	I0813 20:25:40.682880    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:40.683407    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:40.683469    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.683397    5183 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:25:40.947670    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:40.948232    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:40.948258    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:40.948175    5183 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:25:41.330684    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:41.331209    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:41.331233    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:41.331176    5183 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:25:41.755680    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:41.756133    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:41.756165    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:41.756079    5183 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:25:42.230640    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:42.231214    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:42.231249    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:42.231164    5183 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:25:42.819789    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:42.820251    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:42.820281    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:42.820194    5183 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:25:43.656120    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:43.656648    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:43.656673    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:43.656584    5183 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:25:44.404315    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:44.404883    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:44.404913    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:44.404831    5183 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:25:45.393153    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:45.393595    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:45.393622    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:45.393561    5183 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:25:46.584718    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:46.585115    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:46.585146    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:46.585081    5183 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:25:48.264786    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:48.265427    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:48.265461    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:48.265364    5183 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:25:50.612895    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:50.613360    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:50.613397    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:50.613291    5183 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:25:53.983576    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:53.984023    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find current IP address of domain multinode-20210813202419-30853-m02 in network mk-multinode-20210813202419-30853
	I0813 20:25:53.984055    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | I0813 20:25:53.983947    5183 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
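	The retry.go lines above show a jittered, growing backoff while polling the DHCP leases for the new domain's IP (263ms on the first attempt, up past 3s by the last). A sketch of a wait loop with that shape; `lookup` is a placeholder for the driver's lease query, and the 250ms base and 4s cap are illustrative, not minikube's constants:

```go
package ipwait

import (
	"errors"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address, sleeping a jittered,
// growing interval between attempts, like the retry.go delays logged above.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		// sleep somewhere in [delay/2, 3*delay/2), then grow the base delay
		time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}
```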
	I0813 20:25:57.105314    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.105792    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Found IP for machine: 192.168.39.3
	I0813 20:25:57.105824    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.105835    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Reserving static IP address...
	I0813 20:25:57.106107    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210813202419-30853-m02", mac: "52:54:00:81:96:4b", ip: "192.168.39.3"} in network mk-multinode-20210813202419-30853
	I0813 20:25:57.152398    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Reserved static IP address: 192.168.39.3
	I0813 20:25:57.152442    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Getting to WaitForSSH function...
	I0813 20:25:57.152453    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Waiting for SSH to be available...
	I0813 20:25:57.157596    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.157925    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.157959    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.158116    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Using SSH client type: external
	I0813 20:25:57.158147    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa (-rw-------)
	I0813 20:25:57.158177    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:25:57.158194    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | About to run SSH command:
	I0813 20:25:57.158210    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | exit 0
	I0813 20:25:57.290436    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | SSH cmd err, output: <nil>: 
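
The "Using SSH client type: external" lines show readiness being probed by shelling out to the system ssh binary and running "exit 0": a zero exit status means sshd accepted the session. A sketch of that probe under the same flags the log prints; the host, key path, and retry cadence are assumptions for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady shells out to the system ssh binary and runs "exit 0",
// mirroring the external-client probe in the log above.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // exit status 0 means sshd is accepting sessions
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.3", "/path/to/id_rsa") { // illustrative paths
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```
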
	I0813 20:25:57.291344    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) KVM machine creation complete!
	I0813 20:25:57.291404    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetConfigRaw
	I0813 20:25:57.291919    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:57.292092    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:57.292219    4908 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:25:57.292238    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetState
	I0813 20:25:57.295055    4908 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:25:57.295076    4908 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:25:57.295086    4908 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:25:57.295098    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.299585    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.299910    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.299936    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.300125    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.300286    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.300447    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.300599    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.300762    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.300929    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.300942    4908 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:25:57.422080    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:25:57.422111    4908 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:25:57.422123    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.427209    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.427540    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.427565    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.427692    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.427853    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.427980    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.428076    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.428181    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.428340    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.428354    4908 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:25:57.547277    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:25:57.547355    4908 main.go:130] libmachine: found compatible host: buildroot
	I0813 20:25:57.547370    4908 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:25:57.547384    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:57.547604    4908 buildroot.go:166] provisioning hostname "multinode-20210813202419-30853-m02"
	I0813 20:25:57.547637    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:57.547790    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.553068    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.553381    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.553418    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.553533    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.553721    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.553879    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.554012    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.554216    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.554393    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.554413    4908 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813202419-30853-m02 && echo "multinode-20210813202419-30853-m02" | sudo tee /etc/hostname
	I0813 20:25:57.683726    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813202419-30853-m02
	
	I0813 20:25:57.683753    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.688482    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.688770    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.688803    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.688904    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:57.689071    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.689236    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:57.689373    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:57.689514    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:57.689641    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:57.689662    4908 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813202419-30853-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813202419-30853-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813202419-30853-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:25:57.816877    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:25:57.816914    4908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:25:57.816929    4908 buildroot.go:174] setting up certificates
	I0813 20:25:57.816939    4908 provision.go:83] configureAuth start
	I0813 20:25:57.816948    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetMachineName
	I0813 20:25:57.817213    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:25:57.821850    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.822207    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.822237    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.822359    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:57.826610    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.826921    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:57.826951    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:57.827064    4908 provision.go:138] copyHostCerts
	I0813 20:25:57.827101    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:25:57.827137    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:25:57.827150    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:25:57.827218    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:25:57.827291    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:25:57.827317    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:25:57.827329    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:25:57.827358    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:25:57.827403    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:25:57.827426    4908 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:25:57.827435    4908 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:25:57.827462    4908 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
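
The copyHostCerts sequence above follows a found/removing/cp pattern: each destination file is deleted if it already exists, then recopied from the certs directory, with the byte count logged. A minimal sketch of that refresh-copy step; the paths are stand-ins for the .minikube cert files, not the exec_runner.go implementation.

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// refreshCopy removes an existing destination and copies the source in
// fresh, returning the byte count, like the "found ..., removing" and
// "cp: ... --> ... (N bytes)" lines above.
func refreshCopy(src, dst string) (int64, error) {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return 0, err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return 0, err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return 0, err
	}
	defer out.Close()
	return io.Copy(out, in)
}

func main() {
	n, err := refreshCopy(filepath.Join("certs", "cert.pem"), "cert.pem")
	if err != nil {
		fmt.Println("copy failed:", err)
		return
	}
	fmt.Printf("copied cert.pem (%d bytes)\n", n)
}
```
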
	I0813 20:25:57.827560    4908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813202419-30853-m02 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube multinode-20210813202419-30853-m02]
	I0813 20:25:58.099551    4908 provision.go:172] copyRemoteCerts
	I0813 20:25:58.099620    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:25:58.099652    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:58.104572    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.104921    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.104949    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.105064    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:58.105256    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.105413    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:58.105530    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:58.194743    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 20:25:58.194808    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:25:58.211458    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 20:25:58.211510    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:25:58.226802    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 20:25:58.226839    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 20:25:58.242520    4908 provision.go:86] duration metric: configureAuth took 425.571202ms
	I0813 20:25:58.242541    4908 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:25:58.242711    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:58.242820    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:58.248090    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.248396    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.248426    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.248532    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:58.248715    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.248848    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.248975    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:58.249096    4908 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:58.249245    4908 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0813 20:25:58.249263    4908 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:25:58.962979    4908 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:25:58.963017    4908 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:25:58.963028    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetURL
	I0813 20:25:58.965664    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | Using libvirt version 3000000
	I0813 20:25:58.969912    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.970233    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.970258    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.970366    4908 main.go:130] libmachine: Docker is up and running!
	I0813 20:25:58.970378    4908 main.go:130] libmachine: Reticulating splines...
	I0813 20:25:58.970388    4908 client.go:171] LocalClient.Create took 19.124740854s
	I0813 20:25:58.970410    4908 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813202419-30853" took 19.124802703s
	I0813 20:25:58.970423    4908 start.go:267] post-start starting for "multinode-20210813202419-30853-m02" (driver="kvm2")
	I0813 20:25:58.970430    4908 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:25:58.970454    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:58.970693    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:25:58.970721    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:58.974796    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.975134    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:58.975164    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:58.975303    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:58.975472    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:58.975612    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:58.975729    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:59.062393    4908 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:25:59.066339    4908 command_runner.go:124] > NAME=Buildroot
	I0813 20:25:59.066361    4908 command_runner.go:124] > VERSION=2020.02.12
	I0813 20:25:59.066367    4908 command_runner.go:124] > ID=buildroot
	I0813 20:25:59.066373    4908 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 20:25:59.066378    4908 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 20:25:59.066740    4908 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:25:59.066759    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:25:59.066809    4908 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:25:59.066929    4908 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:25:59.066945    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /etc/ssl/certs/308532.pem
	I0813 20:25:59.067049    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:25:59.073037    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:25:59.089115    4908 start.go:270] post-start completed in 118.679631ms
	I0813 20:25:59.089164    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetConfigRaw
	I0813 20:25:59.089745    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:25:59.094392    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.094702    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.094735    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.094918    4908 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/config.json ...
	I0813 20:25:59.095077    4908 start.go:129] duration metric: createHost completed in 19.263499043s
	I0813 20:25:59.095089    4908 start.go:80] releasing machines lock for "multinode-20210813202419-30853-m02", held for 19.263600689s
	I0813 20:25:59.095126    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:59.095289    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:25:59.099345    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.099649    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.099684    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.101566    4908 out.go:177] * Found network options:
	I0813 20:25:59.102839    4908 out.go:177]   - NO_PROXY=192.168.39.64
	W0813 20:25:59.102884    4908 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 20:25:59.102919    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:59.103064    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:25:59.103527    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	W0813 20:25:59.103712    4908 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 20:25:59.103760    4908 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:59.103831    4908 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:25:59.103876    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:59.103835    4908 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:25:59.103934    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:25:59.108508    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.108856    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.108881    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.109047    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:59.109198    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:59.109322    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:59.109470    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:59.110249    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.112226    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:25:59.112259    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:25:59.112419    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:25:59.112582    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:25:59.112711    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:25:59.112839    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:25:59.213124    4908 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 20:25:59.213152    4908 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 20:25:59.213160    4908 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 20:25:59.213167    4908 command_runner.go:124] > The document has moved
	I0813 20:25:59.213176    4908 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 20:25:59.213182    4908 command_runner.go:124] > </BODY></HTML>
	I0813 20:26:03.210204    4908 command_runner.go:124] > {
	I0813 20:26:03.210227    4908 command_runner.go:124] >   "images": [
	I0813 20:26:03.210231    4908 command_runner.go:124] >   ]
	I0813 20:26:03.210235    4908 command_runner.go:124] > }
	I0813 20:26:03.211432    4908 command_runner.go:124] ! time="2021-08-13T20:25:59Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 20:26:03.211468    4908 command_runner.go:124] ! time="2021-08-13T20:26:01Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:26:03.211488    4908 command_runner.go:124] ! time="2021-08-13T20:26:03Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 20:26:03.211506    4908 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.107577771s)
	I0813 20:26:03.211535    4908 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 20:26:03.211569    4908 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.107709659s)
	I0813 20:26:03.211579    4908 ssh_runner.go:149] Run: which lz4
	I0813 20:26:03.216118    4908 command_runner.go:124] > /bin/lz4
	I0813 20:26:03.216190    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 20:26:03.216272    4908 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 20:26:03.220614    4908 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:26:03.221309    4908 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:26:03.221336    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 20:26:05.496670    4908 crio.go:362] Took 2.280427 seconds to copy over tarball
	I0813 20:26:05.496741    4908 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:26:10.991280    4908 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.494512409s)
	I0813 20:26:11.395068    4908 crio.go:369] Took 5.898357 seconds to extract the tarball
	I0813 20:26:11.395087    4908 ssh_runner.go:100] rm: /preloaded.tar.lz4
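
The preload flow above is: stat the tarball on the guest (a non-zero exit means it is missing), scp it over, extract it with tar -I lz4 into /var, then delete it. A sketch of that sequence driven over ssh/scp; the host string and local tarball name are illustrative assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
)

// preload pushes the image tarball only when the guest doesn't already
// have it, then unpacks it with lz4, mirroring the stat/scp/tar/rm
// sequence in the log above.
func preload(host, tarball string) error {
	// stat exits non-zero when the file is missing; that is the signal to copy.
	if exec.Command("ssh", host, "stat", "/preloaded.tar.lz4").Run() == nil {
		return nil // already present, nothing to do
	}
	if err := exec.Command("scp", tarball, host+":/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("scp: %w", err)
	}
	if err := exec.Command("ssh", host,
		"sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("extract: %w", err)
	}
	return exec.Command("ssh", host, "sudo", "rm", "/preloaded.tar.lz4").Run()
}

func main() {
	if err := preload("docker@192.168.39.3", "preloaded-images.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```
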
	I0813 20:26:11.444796    4908 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:26:11.459921    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:26:11.470643    4908 docker.go:153] disabling docker service ...
	I0813 20:26:11.470696    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:26:11.482481    4908 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:26:11.492174    4908 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 20:26:11.492242    4908 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:26:11.633999    4908 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 20:26:11.634091    4908 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:26:11.645422    4908 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 20:26:11.645910    4908 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 20:26:11.774927    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:26:11.785306    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:26:11.797355    4908 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:26:11.797379    4908 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:26:11.797879    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:26:11.805228    4908 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:26:11.805252    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
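
Both crio.conf edits above are sed one-liners that rewrite a single "key = value" line in place (pause_image, then cni_default_network). A small sketch of the same edit done in Go; it is illustrative only, and real code should write atomically and preserve file permissions.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites one "key = value" line in a crio.conf-style
// file, the same edit the sed commands above perform.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Match the whole line containing the key, commented or not, and
	// replace it with the desired assignment.
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setCrioOption("/etc/crio/crio.conf", "cni_default_network", "kindnet"); err != nil {
		fmt.Println(err)
	}
}
```
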
	I0813 20:26:11.813837    4908 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:26:11.820121    4908 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:26:11.820640    4908 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:26:11.820690    4908 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:26:11.834893    4908 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
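
The netfilter setup above is probe-then-fallback: the sysctl key does not exist until the br_netfilter module is loaded, so a failed sysctl (the status-255 "might be okay" line) triggers a modprobe, after which IPv4 forwarding is enabled. A sketch of that order of operations; the command strings mirror the log, but this is illustrative and needs root to run for real.

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter checks the bridge-nf sysctl first, loads br_netfilter
// if the key is missing, then enables IPv4 forwarding, following the
// probe-then-fallback order in the log above.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The key is absent until the kernel module is loaded, which is
		// why a non-zero exit here "might be okay".
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("bridge netfilter and IP forwarding ready")
}
```
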
	I0813 20:26:11.841588    4908 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:26:11.971587    4908 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:26:12.121767    4908 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:26:12.121837    4908 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:26:12.127823    4908 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 20:26:12.127849    4908 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 20:26:12.127859    4908 command_runner.go:124] > Device: 14h/20d	Inode: 30135       Links: 1
	I0813 20:26:12.127869    4908 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:26:12.127876    4908 command_runner.go:124] > Access: 2021-08-13 20:26:03.171125855 +0000
	I0813 20:26:12.127886    4908 command_runner.go:124] > Modify: 2021-08-13 20:25:58.879529405 +0000
	I0813 20:26:12.127895    4908 command_runner.go:124] > Change: 2021-08-13 20:25:58.879529405 +0000
	I0813 20:26:12.127902    4908 command_runner.go:124] >  Birth: -
	I0813 20:26:12.128097    4908 start.go:413] Will wait 60s for crictl version
	I0813 20:26:12.128150    4908 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:26:12.159356    4908 command_runner.go:124] > Version:  0.1.0
	I0813 20:26:12.159380    4908 command_runner.go:124] > RuntimeName:  cri-o
	I0813 20:26:12.159385    4908 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 20:26:12.159394    4908 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 20:26:12.159414    4908 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:26:12.159501    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:26:12.450092    4908 command_runner.go:124] ! time="2021-08-13T20:26:12Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:26:12.452148    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:26:12.452170    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:26:12.452178    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:26:12.452182    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:26:12.452191    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:26:12.452195    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:26:12.452199    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:26:12.452204    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:26:12.452269    4908 ssh_runner.go:149] Run: crio --version
	I0813 20:26:12.732826    4908 command_runner.go:124] ! time="2021-08-13T20:26:12Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:26:12.734818    4908 command_runner.go:124] > crio version 1.20.2
	I0813 20:26:12.734835    4908 command_runner.go:124] > Version:       1.20.2
	I0813 20:26:12.734842    4908 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 20:26:12.734847    4908 command_runner.go:124] > GitTreeState:  clean
	I0813 20:26:12.734868    4908 command_runner.go:124] > BuildDate:     2021-08-10T19:57:38Z
	I0813 20:26:12.734877    4908 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 20:26:12.734881    4908 command_runner.go:124] > Compiler:      gc
	I0813 20:26:12.734886    4908 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:26:14.345675    4908 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:26:14.627119    4908 out.go:177]   - env NO_PROXY=192.168.39.64
	I0813 20:26:14.627232    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:26:14.633413    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:26:14.633777    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:26:14.633815    4908 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:26:14.633978    4908 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:26:14.639555    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:26:14.651171    4908 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853 for IP: 192.168.39.3
	I0813 20:26:14.651227    4908 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:26:14.651249    4908 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:26:14.651266    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:26:14.651285    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:26:14.651300    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:26:14.651319    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:26:14.651394    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:26:14.651442    4908 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:26:14.651461    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:26:14.651499    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:26:14.651535    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:26:14.651577    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:26:14.651640    4908 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:26:14.651679    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.651699    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem -> /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.651715    4908 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.652111    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:26:14.672172    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:26:14.691853    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:26:14.709832    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:26:14.726756    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:26:14.742902    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:26:14.759718    4908 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:26:14.776474    4908 ssh_runner.go:149] Run: openssl version
	I0813 20:26:14.782381    4908 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 20:26:14.782440    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:26:14.789733    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.794316    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.794486    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.794531    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:14.800232    4908 command_runner.go:124] > b5213941
	I0813 20:26:14.800292    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:26:14.808971    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:26:14.817238    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.821902    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.821929    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.821963    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:26:14.827540    4908 command_runner.go:124] > 51391683
	I0813 20:26:14.827793    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:26:14.835963    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:26:14.844187    4908 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.848938    4908 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.848963    4908 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.848995    4908 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:26:14.854773    4908 command_runner.go:124] > 3ec20f2e
	I0813 20:26:14.855261    4908 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
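
The four hash-and-link steps above follow OpenSSL's hashed-symlink convention: the certificate's subject hash names a `<hash>.0` symlink in /etc/ssl/certs so the system trust store can locate the cert. A minimal Go sketch of one such step, shelling out to openssl the same way the logged Run: commands do (it condenses the two `ln -fs` steps into a single symlink; `installCert` is a hypothetical helper, not minikube code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCert computes the OpenSSL subject hash of a PEM certificate and
	// symlinks /etc/ssl/certs/<hash>.0 to it, mirroring the logged sequence.
	func installCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
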
	I0813 20:26:14.863403    4908 ssh_runner.go:149] Run: crio config
	I0813 20:26:15.120564    4908 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 20:26:15.120604    4908 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 20:26:15.120614    4908 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 20:26:15.120619    4908 command_runner.go:124] > #
	I0813 20:26:15.120630    4908 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 20:26:15.120652    4908 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 20:26:15.120679    4908 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 20:26:15.120695    4908 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 20:26:15.120703    4908 command_runner.go:124] > # reload'.
	I0813 20:26:15.120714    4908 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 20:26:15.120727    4908 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 20:26:15.120740    4908 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 20:26:15.120751    4908 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 20:26:15.120757    4908 command_runner.go:124] > [crio]
	I0813 20:26:15.120768    4908 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 20:26:15.120778    4908 command_runner.go:124] > # container images, in this directory.
	I0813 20:26:15.120787    4908 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 20:26:15.120802    4908 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 20:26:15.120813    4908 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 20:26:15.120825    4908 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 20:26:15.120836    4908 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 20:26:15.120844    4908 command_runner.go:124] > #storage_driver = "overlay"
	I0813 20:26:15.120854    4908 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0813 20:26:15.120866    4908 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 20:26:15.120873    4908 command_runner.go:124] > #storage_option = [
	I0813 20:26:15.120878    4908 command_runner.go:124] > #]
	I0813 20:26:15.120889    4908 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 20:26:15.120901    4908 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 20:26:15.120910    4908 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 20:26:15.120923    4908 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 20:26:15.120936    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 20:26:15.120944    4908 command_runner.go:124] > # always happen on a node reboot
	I0813 20:26:15.120980    4908 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 20:26:15.120992    4908 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 20:26:15.121002    4908 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 20:26:15.121013    4908 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 20:26:15.121026    4908 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 20:26:15.121040    4908 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 20:26:15.121049    4908 command_runner.go:124] > [crio.api]
	I0813 20:26:15.121058    4908 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 20:26:15.121066    4908 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 20:26:15.121076    4908 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 20:26:15.121083    4908 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 20:26:15.121095    4908 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 20:26:15.121106    4908 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 20:26:15.121113    4908 command_runner.go:124] > stream_port = "0"
	I0813 20:26:15.121122    4908 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 20:26:15.121131    4908 command_runner.go:124] > stream_enable_tls = false
	I0813 20:26:15.121141    4908 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 20:26:15.121150    4908 command_runner.go:124] > stream_idle_timeout = ""
	I0813 20:26:15.121161    4908 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 20:26:15.121174    4908 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 20:26:15.121180    4908 command_runner.go:124] > # minutes.
	I0813 20:26:15.121185    4908 command_runner.go:124] > stream_tls_cert = ""
	I0813 20:26:15.121194    4908 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 20:26:15.121205    4908 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 20:26:15.121211    4908 command_runner.go:124] > stream_tls_key = ""
	I0813 20:26:15.121222    4908 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 20:26:15.121235    4908 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 20:26:15.121244    4908 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 20:26:15.121250    4908 command_runner.go:124] > stream_tls_ca = ""
	I0813 20:26:15.121269    4908 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:26:15.121276    4908 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 20:26:15.121287    4908 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:26:15.121294    4908 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 20:26:15.121305    4908 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 20:26:15.121319    4908 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 20:26:15.121325    4908 command_runner.go:124] > [crio.runtime]
	I0813 20:26:15.121335    4908 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 20:26:15.121344    4908 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 20:26:15.121353    4908 command_runner.go:124] > # "nofile=1024:2048"
	I0813 20:26:15.121363    4908 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 20:26:15.121370    4908 command_runner.go:124] > #default_ulimits = [
	I0813 20:26:15.121376    4908 command_runner.go:124] > #]
	I0813 20:26:15.121386    4908 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 20:26:15.121393    4908 command_runner.go:124] > no_pivot = false
	I0813 20:26:15.121403    4908 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 20:26:15.121441    4908 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 20:26:15.121452    4908 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 20:26:15.121462    4908 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 20:26:15.121473    4908 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 20:26:15.121480    4908 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 20:26:15.121488    4908 command_runner.go:124] > # Cgroup setting for conmon
	I0813 20:26:15.121495    4908 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 20:26:15.121504    4908 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 20:26:15.121512    4908 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 20:26:15.121523    4908 command_runner.go:124] > conmon_env = [
	I0813 20:26:15.121529    4908 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 20:26:15.121533    4908 command_runner.go:124] > ]
	I0813 20:26:15.121538    4908 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 20:26:15.121545    4908 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 20:26:15.121551    4908 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 20:26:15.121555    4908 command_runner.go:124] > default_env = [
	I0813 20:26:15.121558    4908 command_runner.go:124] > ]
	I0813 20:26:15.121564    4908 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 20:26:15.121569    4908 command_runner.go:124] > selinux = false
	I0813 20:26:15.121577    4908 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 20:26:15.121585    4908 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 20:26:15.121591    4908 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 20:26:15.121595    4908 command_runner.go:124] > seccomp_profile = ""
	I0813 20:26:15.121600    4908 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 20:26:15.121608    4908 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 20:26:15.121614    4908 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 20:26:15.121619    4908 command_runner.go:124] > # which might increase security.
	I0813 20:26:15.121624    4908 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 20:26:15.121631    4908 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 20:26:15.121638    4908 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 20:26:15.121645    4908 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 20:26:15.121651    4908 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 20:26:15.121657    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.121661    4908 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 20:26:15.121668    4908 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 20:26:15.121673    4908 command_runner.go:124] > # irqbalance daemon.
	I0813 20:26:15.121678    4908 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 20:26:15.121684    4908 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 20:26:15.121690    4908 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 20:26:15.121696    4908 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 20:26:15.121701    4908 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 20:26:15.121709    4908 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 20:26:15.121716    4908 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 20:26:15.121719    4908 command_runner.go:124] > # will be added.
	I0813 20:26:15.121724    4908 command_runner.go:124] > default_capabilities = [
	I0813 20:26:15.121729    4908 command_runner.go:124] > 	"CHOWN",
	I0813 20:26:15.121732    4908 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 20:26:15.121736    4908 command_runner.go:124] > 	"FSETID",
	I0813 20:26:15.121739    4908 command_runner.go:124] > 	"FOWNER",
	I0813 20:26:15.121743    4908 command_runner.go:124] > 	"SETGID",
	I0813 20:26:15.121746    4908 command_runner.go:124] > 	"SETUID",
	I0813 20:26:15.121752    4908 command_runner.go:124] > 	"SETPCAP",
	I0813 20:26:15.121757    4908 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 20:26:15.121760    4908 command_runner.go:124] > 	"KILL",
	I0813 20:26:15.121763    4908 command_runner.go:124] > ]
	I0813 20:26:15.121769    4908 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 20:26:15.121776    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:26:15.121781    4908 command_runner.go:124] > default_sysctls = [
	I0813 20:26:15.121784    4908 command_runner.go:124] > ]
	I0813 20:26:15.121790    4908 command_runner.go:124] > # List of additional devices, specified as
	I0813 20:26:15.121798    4908 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 20:26:15.121804    4908 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 20:26:15.121809    4908 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:26:15.121814    4908 command_runner.go:124] > additional_devices = [
	I0813 20:26:15.121817    4908 command_runner.go:124] > ]
	I0813 20:26:15.121823    4908 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 20:26:15.121830    4908 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 20:26:15.121834    4908 command_runner.go:124] > hooks_dir = [
	I0813 20:26:15.121839    4908 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 20:26:15.121842    4908 command_runner.go:124] > ]
	I0813 20:26:15.121848    4908 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 20:26:15.121855    4908 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 20:26:15.121860    4908 command_runner.go:124] > # its default mounts from the following two files:
	I0813 20:26:15.121865    4908 command_runner.go:124] > #
	I0813 20:26:15.121871    4908 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 20:26:15.121878    4908 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 20:26:15.121884    4908 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 20:26:15.121888    4908 command_runner.go:124] > #
	I0813 20:26:15.121894    4908 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 20:26:15.121901    4908 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 20:26:15.121908    4908 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 20:26:15.121914    4908 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 20:26:15.121918    4908 command_runner.go:124] > #
	I0813 20:26:15.121922    4908 command_runner.go:124] > #default_mounts_file = ""
	I0813 20:26:15.121927    4908 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 20:26:15.121933    4908 command_runner.go:124] > pids_limit = 1024
	I0813 20:26:15.121939    4908 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 20:26:15.121945    4908 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 20:26:15.121952    4908 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 20:26:15.121974    4908 command_runner.go:124] > # limit is never exceeded.
	I0813 20:26:15.121984    4908 command_runner.go:124] > log_size_max = -1
	I0813 20:26:15.122064    4908 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 20:26:15.122074    4908 command_runner.go:124] > log_to_journald = false
	I0813 20:26:15.122081    4908 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 20:26:15.122085    4908 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 20:26:15.122091    4908 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 20:26:15.122099    4908 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 20:26:15.122104    4908 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 20:26:15.122108    4908 command_runner.go:124] > bind_mount_prefix = ""
	I0813 20:26:15.122114    4908 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 20:26:15.122119    4908 command_runner.go:124] > read_only = false
	I0813 20:26:15.122125    4908 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 20:26:15.122132    4908 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 20:26:15.122137    4908 command_runner.go:124] > # live configuration reload.
	I0813 20:26:15.122141    4908 command_runner.go:124] > log_level = "info"
	I0813 20:26:15.122147    4908 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 20:26:15.122153    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.122157    4908 command_runner.go:124] > log_filter = ""
	I0813 20:26:15.122165    4908 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 20:26:15.122172    4908 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 20:26:15.122176    4908 command_runner.go:124] > # separated by comma.
	I0813 20:26:15.122182    4908 command_runner.go:124] > uid_mappings = ""
	I0813 20:26:15.122188    4908 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 20:26:15.122194    4908 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 20:26:15.122198    4908 command_runner.go:124] > # separated by comma.
	I0813 20:26:15.122202    4908 command_runner.go:124] > gid_mappings = ""
	I0813 20:26:15.122208    4908 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 20:26:15.122216    4908 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 20:26:15.122221    4908 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 20:26:15.122229    4908 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 20:26:15.122238    4908 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 20:26:15.122247    4908 command_runner.go:124] > # and manage their lifecycle.
	I0813 20:26:15.122257    4908 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 20:26:15.122266    4908 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 20:26:15.122272    4908 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 20:26:15.122279    4908 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 20:26:15.122284    4908 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 20:26:15.122290    4908 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 20:26:15.122294    4908 command_runner.go:124] > drop_infra_ctr = false
	I0813 20:26:15.122301    4908 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 20:26:15.122307    4908 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 20:26:15.122315    4908 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 20:26:15.122322    4908 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 20:26:15.122328    4908 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 20:26:15.122333    4908 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 20:26:15.122339    4908 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 20:26:15.122346    4908 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 20:26:15.122351    4908 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 20:26:15.122357    4908 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 20:26:15.122364    4908 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 20:26:15.122371    4908 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 20:26:15.122375    4908 command_runner.go:124] > default_runtime = "runc"
	I0813 20:26:15.122381    4908 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 20:26:15.122389    4908 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 20:26:15.122396    4908 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 20:26:15.122403    4908 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 20:26:15.122406    4908 command_runner.go:124] > #
	I0813 20:26:15.122411    4908 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 20:26:15.122418    4908 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 20:26:15.122421    4908 command_runner.go:124] > #  runtime_type = "oci"
	I0813 20:26:15.122426    4908 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 20:26:15.122431    4908 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 20:26:15.122435    4908 command_runner.go:124] > #  allowed_annotations = []
	I0813 20:26:15.122438    4908 command_runner.go:124] > # Where:
	I0813 20:26:15.122444    4908 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 20:26:15.122452    4908 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 20:26:15.122458    4908 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 20:26:15.122466    4908 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 20:26:15.122470    4908 command_runner.go:124] > #   in $PATH.
	I0813 20:26:15.122476    4908 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 20:26:15.122482    4908 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 20:26:15.122488    4908 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 20:26:15.122492    4908 command_runner.go:124] > #   state.
	I0813 20:26:15.122498    4908 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 20:26:15.122504    4908 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 20:26:15.122511    4908 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 20:26:15.122545    4908 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 20:26:15.122556    4908 command_runner.go:124] > #   The currently recognized values are:
	I0813 20:26:15.122573    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 20:26:15.122582    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 20:26:15.122589    4908 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 20:26:15.122594    4908 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 20:26:15.122599    4908 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 20:26:15.122603    4908 command_runner.go:124] > runtime_type = "oci"
	I0813 20:26:15.122607    4908 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 20:26:15.122614    4908 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 20:26:15.122619    4908 command_runner.go:124] > # running containers
	I0813 20:26:15.122623    4908 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 20:26:15.122630    4908 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 20:26:15.122637    4908 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 20:26:15.122643    4908 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 20:26:15.122650    4908 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 20:26:15.122654    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 20:26:15.122659    4908 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 20:26:15.122663    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 20:26:15.122668    4908 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 20:26:15.122674    4908 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 20:26:15.122681    4908 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 20:26:15.122684    4908 command_runner.go:124] > #
	I0813 20:26:15.122690    4908 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 20:26:15.122697    4908 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 20:26:15.122703    4908 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 20:26:15.122711    4908 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 20:26:15.122717    4908 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 20:26:15.122721    4908 command_runner.go:124] > [crio.image]
	I0813 20:26:15.122727    4908 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 20:26:15.122732    4908 command_runner.go:124] > default_transport = "docker://"
	I0813 20:26:15.122738    4908 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 20:26:15.122745    4908 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:26:15.122749    4908 command_runner.go:124] > global_auth_file = ""
	I0813 20:26:15.122754    4908 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 20:26:15.122760    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.122765    4908 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 20:26:15.122771    4908 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 20:26:15.122778    4908 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:26:15.122783    4908 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:15.122788    4908 command_runner.go:124] > pause_image_auth_file = ""
	I0813 20:26:15.122801    4908 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 20:26:15.122810    4908 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 20:26:15.122818    4908 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 20:26:15.122825    4908 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 20:26:15.122829    4908 command_runner.go:124] > pause_command = "/pause"
	I0813 20:26:15.122836    4908 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 20:26:15.122843    4908 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 20:26:15.122862    4908 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 20:26:15.122872    4908 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 20:26:15.122877    4908 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 20:26:15.122881    4908 command_runner.go:124] > signature_policy = ""
	I0813 20:26:15.122888    4908 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 20:26:15.122899    4908 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 20:26:15.122903    4908 command_runner.go:124] > # changing them here.
	I0813 20:26:15.122907    4908 command_runner.go:124] > #insecure_registries = "[]"
	I0813 20:26:15.122914    4908 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 20:26:15.122920    4908 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 20:26:15.122924    4908 command_runner.go:124] > image_volumes = "mkdir"
	I0813 20:26:15.122930    4908 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 20:26:15.122937    4908 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 20:26:15.122943    4908 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0813 20:26:15.122950    4908 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 20:26:15.122954    4908 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 20:26:15.122958    4908 command_runner.go:124] > #registries = [
	I0813 20:26:15.122962    4908 command_runner.go:124] > # 	"docker.io",
	I0813 20:26:15.122965    4908 command_runner.go:124] > #]
	I0813 20:26:15.122970    4908 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 20:26:15.122975    4908 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 20:26:15.122981    4908 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 20:26:15.122985    4908 command_runner.go:124] > # CNI plugins.
	I0813 20:26:15.122991    4908 command_runner.go:124] > [crio.network]
	I0813 20:26:15.122997    4908 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 20:26:15.123002    4908 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0813 20:26:15.123007    4908 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 20:26:15.123012    4908 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 20:26:15.123018    4908 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 20:26:15.123024    4908 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 20:26:15.123044    4908 command_runner.go:124] > plugin_dirs = [
	I0813 20:26:15.123054    4908 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 20:26:15.123067    4908 command_runner.go:124] > ]
	I0813 20:26:15.123077    4908 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 20:26:15.123081    4908 command_runner.go:124] > [crio.metrics]
	I0813 20:26:15.123088    4908 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 20:26:15.123092    4908 command_runner.go:124] > enable_metrics = true
	I0813 20:26:15.123099    4908 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 20:26:15.123106    4908 command_runner.go:124] > metrics_port = 9090
	I0813 20:26:15.123150    4908 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 20:26:15.123161    4908 command_runner.go:124] > metrics_socket = ""
	I0813 20:26:15.123227    4908 command_runner.go:124] ! time="2021-08-13T20:26:15Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 20:26:15.123252    4908 command_runner.go:124] ! time="2021-08-13T20:26:15Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 20:26:15.123265    4908 command_runner.go:124] ! time="2021-08-13T20:26:15Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 20:26:15.123291    4908 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
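
Since `crio config` emits plain TOML, the dump above can be read back programmatically. A minimal sketch using the github.com/BurntSushi/toml decoder (an assumed dependency; the struct covers only a handful of the keys shown above, purely for illustration):

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// crioConfig maps a small subset of the [crio] and [crio.runtime] tables
	// printed in the log above.
	type crioConfig struct {
		Crio struct {
			LogDir  string `toml:"log_dir"`
			Runtime struct {
				CgroupManager  string `toml:"cgroup_manager"`
				PidsLimit      int64  `toml:"pids_limit"`
				DefaultRuntime string `toml:"default_runtime"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		// For the run above this prints: systemd 1024 runc
		fmt.Println(cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.PidsLimit,
			cfg.Crio.Runtime.DefaultRuntime)
	}
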
	I0813 20:26:15.123417    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:26:15.123433    4908 cni.go:154] 2 nodes found, recommending kindnet
	I0813 20:26:15.123443    4908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:26:15.123461    4908 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813202419-30853 NodeName:multinode-20210813202419-30853-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:26:15.123626    4908 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813202419-30853-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
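
The config above is rendered from the options struct logged at kubeadm.go:153. A simplified sketch of such a render step with text/template (the template and the kubeadmParams type are illustrative stand-ins, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is an illustrative subset of the options logged above.
	type kubeadmParams struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		CRISocket        string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.39.3",
			APIServerPort:    8443,
			NodeName:         "multinode-20210813202419-30853-m02",
			CRISocket:        "/var/run/crio/crio.sock",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			os.Exit(1)
		}
	}
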
	
	I0813 20:26:15.123708    4908 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813202419-30853-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:26:15.123770    4908 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:26:15.130951    4908 command_runner.go:124] > kubeadm
	I0813 20:26:15.130972    4908 command_runner.go:124] > kubectl
	I0813 20:26:15.130977    4908 command_runner.go:124] > kubelet
	I0813 20:26:15.130992    4908 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:26:15.131033    4908 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0813 20:26:15.138298    4908 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0813 20:26:15.151957    4908 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:26:15.163904    4908 ssh_runner.go:149] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0813 20:26:15.168042    4908 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
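
The grep/echo pipeline above is an idempotent /etc/hosts update: strip any line already tagged with the control-plane name, then append the current mapping. The same effect as a Go sketch (it writes in place rather than via the temp-file-plus-sudo-cp dance the logged command uses; setHostsEntry is a hypothetical helper):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	// setHostsEntry drops any existing line for host (matched on the trailing
	// "\t<host>" suffix the logged grep uses) and appends the new mapping.
	func setHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.39.64", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
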
	I0813 20:26:15.178271    4908 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:26:15.178533    4908 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:26:15.178717    4908 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:26:15.178763    4908 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:26:15.189761    4908 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0813 20:26:15.190179    4908 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:26:15.190660    4908 main.go:130] libmachine: Using API Version  1
	I0813 20:26:15.190681    4908 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:26:15.191029    4908 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:26:15.191227    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:26:15.191376    4908 start.go:241] JoinCluster: &{Name:multinode-20210813202419-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202419-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:26:15.191460    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0813 20:26:15.191480    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:26:15.197254    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:26:15.197612    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:26:15.197640    4908 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:26:15.197794    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:26:15.197950    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:26:15.198078    4908 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:26:15.198178    4908 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:26:15.410711    4908 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token vnwbzt.8w4iune7lflm0jcs --discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b 
	I0813 20:26:15.410967    4908 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:15.411008    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token vnwbzt.8w4iune7lflm0jcs --discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813202419-30853-m02"
	I0813 20:26:15.573144    4908 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 20:26:15.892346    4908 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0813 20:26:15.892373    4908 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0813 20:26:15.937758    4908 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 20:26:15.938171    4908 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 20:26:15.938206    4908 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 20:26:16.111727    4908 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0813 20:26:22.209577    4908 command_runner.go:124] > This node has joined the cluster:
	I0813 20:26:22.209610    4908 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0813 20:26:22.209620    4908 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0813 20:26:22.209631    4908 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0813 20:26:22.211465    4908 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 20:26:22.211595    4908 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token vnwbzt.8w4iune7lflm0jcs --discovery-token-ca-cert-hash sha256:00d93bc1122e8abafdd2223d172c3617c6ca5e75fcbdac147810f69b6f47ae9b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813202419-30853-m02": (6.800564814s)
	I0813 20:26:22.211633    4908 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0813 20:26:22.524241    4908 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0813 20:26:22.524278    4908 start.go:243] JoinCluster complete in 7.332902101s
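
The join above is a two-step flow: `kubeadm token create --print-join-command --ttl=0` on the control plane produces the join command, which is then run on the worker with --ignore-preflight-errors, --cri-socket and --node-name appended. A condensed Go sketch of that flow, exec'ing kubeadm directly instead of over SSH as minikube does:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1 (control plane): generate a non-expiring join command.
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			log.Fatal(err)
		}
		join := strings.Fields(strings.TrimSpace(string(out)))

		// Step 2 (worker): run the join command with the overrides seen above.
		args := append(join[1:],
			"--ignore-preflight-errors=all",
			"--cri-socket", "/var/run/crio/crio.sock",
			"--node-name=multinode-20210813202419-30853-m02")
		if err := exec.Command(join[0], args...).Run(); err != nil {
			log.Fatal(err)
		}
	}
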
	I0813 20:26:22.524289    4908 cni.go:93] Creating CNI manager for ""
	I0813 20:26:22.524294    4908 cni.go:154] 2 nodes found, recommending kindnet
	I0813 20:26:22.524350    4908 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:26:22.530265    4908 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 20:26:22.530295    4908 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 20:26:22.530304    4908 command_runner.go:124] > Device: 10h/16d	Inode: 22875       Links: 1
	I0813 20:26:22.530314    4908 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:26:22.530322    4908 command_runner.go:124] > Access: 2021-08-13 20:24:33.325091489 +0000
	I0813 20:26:22.530335    4908 command_runner.go:124] > Modify: 2021-08-10 20:02:08.000000000 +0000
	I0813 20:26:22.530347    4908 command_runner.go:124] > Change: 2021-08-13 20:24:29.381091489 +0000
	I0813 20:26:22.530358    4908 command_runner.go:124] >  Birth: -
	I0813 20:26:22.530415    4908 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:26:22.530428    4908 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:26:22.544043    4908 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:26:22.874980    4908 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0813 20:26:22.878379    4908 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0813 20:26:22.882142    4908 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0813 20:26:22.896367    4908 command_runner.go:124] > daemonset.apps/kindnet configured
	I0813 20:26:22.899822    4908 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:22.901777    4908 out.go:177] * Verifying Kubernetes components...
	I0813 20:26:22.901873    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:22.912851    4908 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:26:22.913078    4908 kapi.go:59] client config for multinode-20210813202419-30853: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202419-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:26:22.914333    4908 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813202419-30853-m02" to be "Ready" ...
	I0813 20:26:22.914424    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:22.914440    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:22.914447    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:22.914457    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:22.919749    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:22.919763    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:22.919767    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:22.919771    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:22.919775    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:22.919780    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:22 GMT
	I0813 20:26:22.919785    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:22.920063    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"570","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5415 chars]
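
    The block above is one iteration of the readiness poll: node_ready.go fetches the node object roughly every 500ms (visible in the timestamps that follow) and checks its Ready condition until it reports True or the 6m0s budget expires. A minimal sketch of such a loop with client-go's wait helper (an illustrative reconstruction, not necessarily minikube's exact code):

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls a node until its Ready condition reports True,
    // mirroring the ~500ms GET cadence visible in the log below.
    func WaitNodeReady(c kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient API errors as "not ready yet"
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == corev1.NodeReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }
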
	I0813 20:26:23.421063    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:23.421087    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:23.421092    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:23.421097    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:23.424204    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:23.424227    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:23.424234    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:23.424240    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:23.424245    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:23.424249    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:23 GMT
	I0813 20:26:23.424253    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:23.424836    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"570","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5415 chars]
	I0813 20:26:23.921471    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:23.921499    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:23.921508    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:23.921514    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:23.925796    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:23.925817    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:23.925824    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:23.925829    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:23.925833    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:23.925838    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:23.925843    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:23 GMT
	I0813 20:26:23.926144    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"570","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5415 chars]
	I0813 20:26:24.421362    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:24.421386    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:24.421392    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:24.421397    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:24.424093    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:24.424117    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:24.424125    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:24.424130    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:24 GMT
	I0813 20:26:24.424135    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:24.424140    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:24.424145    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:24.424475    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:24.920695    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:24.920732    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:24.920739    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:24.920745    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:24.926187    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:24.926220    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:24.926228    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:24.926233    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:24.926242    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:24.926247    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:24.926252    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:24 GMT
	I0813 20:26:24.926862    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:24.927137    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:25.421491    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:25.421518    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:25.421526    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:25.421532    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:25.425220    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:25.425244    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:25.425250    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:25 GMT
	I0813 20:26:25.425258    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:25.425263    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:25.425268    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:25.425272    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:25.426053    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:25.920691    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:25.920717    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:25.920723    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:25.920728    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:25.923957    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:25.923979    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:25.923984    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:25.923993    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:25.923998    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:25.924005    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:25.924013    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:25 GMT
	I0813 20:26:25.924114    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:26.420612    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:26.420637    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:26.420642    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:26.420646    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:26.424486    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:26.424507    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:26.424512    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:26.424515    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:26.424518    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:26 GMT
	I0813 20:26:26.424521    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:26.424524    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:26.424831    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:26.920469    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:26.920501    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:26.920507    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:26.920511    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:26.923901    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:26.923922    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:26.923929    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:26.923933    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:26.923937    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:26 GMT
	I0813 20:26:26.923942    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:26.923946    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:26.924495    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:27.421253    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:27.421284    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:27.421292    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:27.421298    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:27.425321    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:27.425343    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:27.425349    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:27.425352    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:27.425358    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:27.425362    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:27 GMT
	I0813 20:26:27.425365    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:27.426034    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:27.426321    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:27.920529    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:27.920552    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:27.920558    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:27.920562    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:27.924134    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:27.924155    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:27.924160    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:27.924164    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:27.924167    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:27.924170    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:27.924173    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:27 GMT
	I0813 20:26:27.924259    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:28.420575    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:28.420599    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:28.420605    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:28.420610    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:28.424172    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:28.424195    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:28.424202    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:28.424207    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:28 GMT
	I0813 20:26:28.424211    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:28.424215    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:28.424220    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:28.424621    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:28.921346    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:28.921372    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:28.921378    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:28.921382    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:28.927004    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:28.927025    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:28.927030    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:28.927034    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:28.927038    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:28.927042    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:28.927052    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:28 GMT
	I0813 20:26:28.927765    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:29.421216    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:29.421239    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:29.421245    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:29.421249    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:29.425370    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:29.425391    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:29.425397    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:29 GMT
	I0813 20:26:29.425401    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:29.425406    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:29.425410    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:29.425414    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:29.425976    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:29.920964    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:29.920987    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:29.920993    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:29.920997    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:29.924148    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:29.924171    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:29.924178    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:29.924190    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:29.924194    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:29.924199    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:29.924203    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:29 GMT
	I0813 20:26:29.924407    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:29.924739    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:30.420436    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:30.420459    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:30.420471    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:30.420475    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:30.423266    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:30.423283    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:30.423288    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:30.423291    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:30.423294    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:30.423297    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:30.423300    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:30 GMT
	I0813 20:26:30.423978    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:30.920633    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:30.920659    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:30.920665    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:30.920669    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:30.922917    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:30.922938    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:30.922944    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:30.922949    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:30.922953    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:30.922961    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:30.922965    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:30 GMT
	I0813 20:26:30.923119    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:31.420757    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:31.420787    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:31.420795    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:31.420801    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:31.425211    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:31.425231    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:31.425237    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:31.425242    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:31.425247    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:31.425251    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:31.425255    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:31 GMT
	I0813 20:26:31.425526    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:31.921229    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:31.921256    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:31.921262    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:31.921266    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:31.926039    4908 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 20:26:31.926062    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:31.926068    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:31 GMT
	I0813 20:26:31.926073    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:31.926076    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:31.926081    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:31.926085    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:31.926692    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"575","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5524 chars]
	I0813 20:26:31.926956    4908 node_ready.go:58] node "multinode-20210813202419-30853-m02" has status "Ready":"False"
	I0813 20:26:32.421422    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:32.421452    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.421458    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.421463    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.425000    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.425023    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.425030    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.425035    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.425039    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.425046    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.425050    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.425149    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"596","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5717 chars]
	I0813 20:26:32.425447    4908 node_ready.go:49] node "multinode-20210813202419-30853-m02" has status "Ready":"True"
	I0813 20:26:32.425469    4908 node_ready.go:38] duration metric: took 9.511109901s waiting for node "multinode-20210813202419-30853-m02" to be "Ready" ...
	I0813 20:26:32.425520    4908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:26:32.425603    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:32.425615    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.425622    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.425628    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.431919    4908 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 20:26:32.431940    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.431946    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.431951    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.431955    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.431960    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.431972    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.433683    4908 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 66665 chars]
	I0813 20:26:32.435385    4908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.435459    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-58k2l
	I0813 20:26:32.435468    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.435473    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.435477    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.437998    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:32.438011    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.438015    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.438020    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.438024    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.438029    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.438033    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.438439    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-58k2l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0431b736-8284-40c7-9bc4-fcc968e4c41b","resourceVersion":"499","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"37ba4031-6ec9-4a65-9b6c-3f1921a1145a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37ba4031-6ec9-4a65-9b6c-3f1921a1145a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5733 chars]
	I0813 20:26:32.438820    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.438837    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.438844    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.438864    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.440694    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:32.440711    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.440716    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.440721    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.440725    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.440729    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.440738    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.441051    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.441321    4908 pod_ready.go:92] pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.441334    4908 pod_ready.go:81] duration metric: took 5.922196ms waiting for pod "coredns-558bd4d5db-58k2l" in "kube-system" namespace to be "Ready" ...
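
    Each per-pod wait follows the same pattern as the node wait: GET the pod, inspect its status conditions, then GET the node it is scheduled on before declaring it Ready and recording the duration metric. A minimal sketch of the condition check itself (helper name is hypothetical; this mirrors, but is not copied from, minikube's pod_ready.go):

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether a pod's Ready condition is True; this is the
    // check behind the pod_ready.go `"Ready":"True"` lines in the log.
    func podReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
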
	I0813 20:26:32.441342    4908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.441387    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813202419-30853
	I0813 20:26:32.441395    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.441399    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.441403    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.443185    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:32.443197    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.443201    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.443205    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.443208    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.443211    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.443214    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.443414    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813202419-30853","namespace":"kube-system","uid":"0e8c51de-4800-4c2d-af81-4f4f197d3cd5","resourceVersion":"491","creationTimestamp":"2021-08-13T20:25:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.64:2379","kubernetes.io/config.hash":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.mirror":"b2e5f07a9c29a3554b1f5628928cde4b","kubernetes.io/config.seen":"2021-08-13T20:25:00.776305134Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0813 20:26:32.443652    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.443663    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.443668    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.443672    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.445525    4908 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:32.445541    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.445547    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.445551    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.445555    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.445560    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.445564    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.445981    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.446268    4908 pod_ready.go:92] pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.446284    4908 pod_ready.go:81] duration metric: took 4.935192ms waiting for pod "etcd-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.446300    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.446359    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813202419-30853
	I0813 20:26:32.446370    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.446376    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.446385    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.450385    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.450403    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.450409    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.450414    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.450418    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.450422    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.450426    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.451077    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813202419-30853","namespace":"kube-system","uid":"53b6207c-cf99-4cb1-b237-0e69df65538b","resourceVersion":"478","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.64:8443","kubernetes.io/config.hash":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.mirror":"914dc216865e390473fe61a3bb624cd9","kubernetes.io/config.seen":"2021-08-13T20:25:00.776307664Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0813 20:26:32.451437    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.451456    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.451463    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.451469    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.455111    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.455124    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.455128    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.455132    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.455135    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.455138    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.455141    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.455868    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.456073    4908 pod_ready.go:92] pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.456084    4908 pod_ready.go:81] duration metric: took 9.770345ms waiting for pod "kube-apiserver-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.456094    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.456153    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813202419-30853
	I0813 20:26:32.456164    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.456170    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.456176    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.459155    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:32.459170    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.459176    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.459181    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.459185    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.459190    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.459194    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.459594    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813202419-30853","namespace":"kube-system","uid":"f1752bba-a132-4093-8ff3-ad48483d468b","resourceVersion":"475","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.mirror":"a2845623a5b448da54677ebde58b73a6","kubernetes.io/config.seen":"2021-08-13T20:25:00.776309845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0813 20:26:32.459934    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:32.459952    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.459957    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.459963    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.463301    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:32.463316    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.463322    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.463326    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.463330    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.463334    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.463339    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.464040    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:32.464345    4908 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.464364    4908 pod_ready.go:81] duration metric: took 8.25283ms waiting for pod "kube-controller-manager-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.464375    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8vgbg" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.621762    4908 request.go:600] Waited for 157.330254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8vgbg
	I0813 20:26:32.621827    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8vgbg
	I0813 20:26:32.621833    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.621839    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.621843    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.629534    4908 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 20:26:32.629560    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.629568    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.629574    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.629579    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.629584    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.629590    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.630528    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8vgbg","generateName":"kube-proxy-","namespace":"kube-system","uid":"c0eacea5-4ed3-4d69-bb88-ffb1496d2245","resourceVersion":"582","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb4b18d1-5cff-490a-b573-900487c4d9e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb4b18d1-5cff-490a-b573-900487c4d9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5762 chars]
	I0813 20:26:32.822400    4908 request.go:600] Waited for 191.401336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:32.822476    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853-m02
	I0813 20:26:32.822484    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:32.822492    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.822499    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.827976    4908 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:26:32.827998    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:32.828004    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.828009    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:32.828014    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:32.828018    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.828023    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.828529    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853-m02","uid":"c6e5ddc4-9825-4147-bbdf-88091b25ebbc","resourceVersion":"596","creationTimestamp":"2021-08-13T20:26:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5717 chars]
	I0813 20:26:32.828861    4908 pod_ready.go:92] pod "kube-proxy-8vgbg" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:32.828879    4908 pod_ready.go:81] duration metric: took 364.496122ms waiting for pod "kube-proxy-8vgbg" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:32.828891    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.022314    4908 request.go:600] Waited for 193.352212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rb42p
	I0813 20:26:33.022410    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rb42p
	I0813 20:26:33.022423    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.022431    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.022438    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.025497    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:33.025520    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.025526    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.025531    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.025536    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.025540    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.025546    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.025703    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rb42p","generateName":"kube-proxy-","namespace":"kube-system","uid":"5633ede2-5578-4565-97af-b83cf1b25f0d","resourceVersion":"459","creationTimestamp":"2021-08-13T20:25:25Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb4b18d1-5cff-490a-b573-900487c4d9e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb4b18d1-5cff-490a-b573-900487c4d9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0813 20:26:33.222506    4908 request.go:600] Waited for 196.354324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.222577    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.222592    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.222599    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.222606    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.225189    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:33.225203    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.225210    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.225214    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.225217    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.225219    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.225222    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.225367    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:33.225616    4908 pod_ready.go:92] pod "kube-proxy-rb42p" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:33.225629    4908 pod_ready.go:81] duration metric: took 396.730944ms waiting for pod "kube-proxy-rb42p" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.225639    4908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.422086    4908 request.go:600] Waited for 196.369976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202419-30853
	I0813 20:26:33.422145    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202419-30853
	I0813 20:26:33.422151    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.422156    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.422161    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.425701    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:33.425727    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.425734    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.425739    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.425743    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.425747    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.425751    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.426064    4908 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813202419-30853","namespace":"kube-system","uid":"ed906c56-f110-4e49-aa1c-5e0e0b8cb88c","resourceVersion":"384","creationTimestamp":"2021-08-13T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.mirror":"e846b027c41f0882917076be3af95ba2","kubernetes.io/config.seen":"2021-08-13T20:25:00.776286387Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0813 20:26:33.621694    4908 request.go:600] Waited for 195.239627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.621766    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes/multinode-20210813202419-30853
	I0813 20:26:33.621775    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.621784    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.621790    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.625205    4908 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:33.625224    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.625229    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.625233    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.625236    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.625239    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.625242    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.625559    4908 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6553 chars]
	I0813 20:26:33.625882    4908 pod_ready.go:92] pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:33.625897    4908 pod_ready.go:81] duration metric: took 400.25015ms waiting for pod "kube-scheduler-multinode-20210813202419-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:33.625910    4908 pod_ready.go:38] duration metric: took 1.20037363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
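The pod_ready waits above poll GET /api/v1/namespaces/kube-system/pods/<name> until each control-plane pod reports the Ready condition, with a 6m0s budget per pod. A minimal client-go sketch of that polling pattern (not minikube's actual pod_ready.go; the kubeconfig path is the default one, and the pod name is copied from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True — the same
// check the pod_ready.go:92 lines above log as status "Ready":"True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms with the same 6-minute budget the log shows per pod.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "etcd-multinode-20210813202419-30853", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		return isPodReady(pod), nil
	})
	fmt.Println("pod ready:", err == nil)
}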
	I0813 20:26:33.625929    4908 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:26:33.626000    4908 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:33.637129    4908 system_svc.go:56] duration metric: took 11.192763ms WaitForService to wait for kubelet.
	I0813 20:26:33.637152    4908 kubeadm.go:547] duration metric: took 10.737295864s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:26:33.637180    4908 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:26:33.821532    4908 request.go:600] Waited for 184.263745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.64:8443/api/v1/nodes
	I0813 20:26:33.821600    4908 round_trippers.go:432] GET https://192.168.39.64:8443/api/v1/nodes
	I0813 20:26:33.821608    4908 round_trippers.go:438] Request Headers:
	I0813 20:26:33.821615    4908 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.821622    4908 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.824485    4908 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:33.824506    4908 round_trippers.go:460] Response Headers:
	I0813 20:26:33.824513    4908 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.824517    4908 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.824521    4908 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.824525    4908 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 8ca8ec6a-163d-48d0-b75e-9c24a422a41b
	I0813 20:26:33.824529    4908 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 05ce7549-f5b8-4f29-acb0-c9efafa0feb0
	I0813 20:26:33.825028    4908 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"multinode-20210813202419-30853","uid":"664e4709-27b7-48ee-8660-caab94cd1b40","resourceVersion":"394","creationTimestamp":"2021-08-13T20:25:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202419-30853","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202419-30853","minikube.k8s.io/updated_at":"2021_08_13T20_25_13_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 13315 chars]
	I0813 20:26:33.825422    4908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:26:33.825442    4908 node_conditions.go:123] node cpu capacity is 2
	I0813 20:26:33.825454    4908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:26:33.825458    4908 node_conditions.go:123] node cpu capacity is 2
	I0813 20:26:33.825462    4908 node_conditions.go:105] duration metric: took 188.277523ms to run NodePressure ...
	I0813 20:26:33.825472    4908 start.go:231] waiting for startup goroutines ...
	I0813 20:26:33.868409    4908 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:26:33.871186    4908 out.go:177] * Done! kubectl is now configured to use "multinode-20210813202419-30853" cluster and "default" namespace by default
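The request.go:600 "Waited ... due to client-side throttling, not priority and fairness" lines above are produced by client-go's default client-side rate limiter (5 QPS with a burst of 10), not by the server's API Priority and Fairness. Where those waits matter, the limits can be raised on the rest.Config before building the clientset; a hedged sketch with illustrative numbers:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5, Burst=10; anything past the burst is
	// delayed on the client, which is what request.go:600 reports above.
	cfg.QPS = 50
	cfg.Burst = 100

	client := kubernetes.NewForConfigOrDie(cfg)
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("server version:", v.GitVersion) // e.g. v1.21.3 per the log
}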
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:24:30 UTC, end at Fri 2021-08-13 20:30:46 UTC. --
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.471742965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b17480e5-3871-4a6d-b0fa-d4b88fd9032e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.556793973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e658a4d9-718f-449d-a42a-2dedbf2afd46 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.556937623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e658a4d9-718f-449d-a42a-2dedbf2afd46 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.557265238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e658a4d9-718f-449d-a42a-2dedbf2afd46 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.594567588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bc313de8-22df-424e-82a8-8168c38ef1b3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.594707697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bc313de8-22df-424e-82a8-8168c38ef1b3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.594888992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bc313de8-22df-424e-82a8-8168c38ef1b3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.633667626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c862011e-2a74-47d4-a812-296658d1dda4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.633806501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c862011e-2a74-47d4-a812-296658d1dda4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.633988350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c862011e-2a74-47d4-a812-296658d1dda4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.679516801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a0a7ef4-1df7-4903-bed5-6dc6f8abb335 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.679659036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a0a7ef4-1df7-4903-bed5-6dc6f8abb335 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.679837804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a0a7ef4-1df7-4903-bed5-6dc6f8abb335 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.722714701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d80b43c-b853-4d12-b3c1-ff98c06b6157 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.722852265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d80b43c-b853-4d12-b3c1-ff98c06b6157 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.723041532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d80b43c-b853-4d12-b3c1-ff98c06b6157 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.760487855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c91bdf84-b01d-4c4b-8561-2d769732148a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.760550739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c91bdf84-b01d-4c4b-8561-2d769732148a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.760737582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c91bdf84-b01d-4c4b-8561-2d769732148a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.792522522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=448a2457-547c-4f0c-b8db-b1b876f3c2b1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.792662684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=448a2457-547c-4f0c-b8db-b1b876f3c2b1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.793177998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=448a2457-547c-4f0c-b8db-b1b876f3c2b1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.829324299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c19a9a87-1d7e-4917-b4d2-733d6a4de2e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.829499793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c19a9a87-1d7e-4917-b4d2-733d6a4de2e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:30:46 multinode-20210813202419-30853 crio[2075]: time="2021-08-13 20:30:46.829737304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bda85b31fadfed8042cbb8c6a06e1901e43dd3217eff8881716feb0594c05d41,PodSandboxId:d03383b45e25809b3d9b8492f68cd019d08b9043636e5ac36e1ff13200823730,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628886399538659232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-nfr5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25a75dcb-606f-4b7d-8767-8d6e54d476b1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e9cb016,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a,PodSandboxId:a31f0c6333a6fc85c69fc15ab3d15f2e3e9c2966a34d5161b4bc9818251cd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628886329756899108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7839155d-5552-45cb-ab31-a243fd82f32e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3b1a2f,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af471efb7740e17dc975163d42ff021fa16ec800d6aaa60a3f011f265f55a99,PodSandboxId:5054e726d59100bdac62fc6d9dca1a21c3f9667caed285ea9f7f61354cca12db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628886329701517551,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hc4k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c73e66e-2ec6-4a1b-a7af-3edb2c517f18,},Annotations:map[string]string{io.kubernetes.container.hash: 276c758d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb,PodSandboxId:368ae7f59fbb5e8a7c9649c04ed799f7afe9dc7e13cf16651d18d6088ca864c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628886328692925423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-58k2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431b736-8284-40c7-9bc4-fcc968e4c41b,},Annotations:map[string]string{io.kubernetes.container.hash: 45fcb713,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e,PodSandboxId:64bd4de6112dfd335f565540746cf665ffd8c6e61c76b07e2f1655d343d1b737,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628886327101843603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb42p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5633ede2-5578-4565-97af-b83cf
1b25f0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42359acb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2,PodSandboxId:022e7f58532b91163c39a771d4bcabe6b8f425deaa119da79dc6c6fcf19cb66b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628886304137158129,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e846b027c41f0882917076be3af9
5ba2,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7,PodSandboxId:ea3b9756a38fbda5eb6788d0530786e0a9542aeb9574f6e5b2ec5308c7765f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628886303932478092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a2845623a5b448da54677ebde58b73a6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc,PodSandboxId:5456dc5ae342a77aef782057d3a34b6bd1304e7af660acb2b4f611ae611412e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628886303880616901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e5f07a9c29a3554b1f5628928cde4b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 547d1563,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23,PodSandboxId:bd5b31b6d14b0276739f5abf313b9df7acd33b514723cc873787beeba6b743b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628886303594268292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813202419-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914dc216865e390473fe61a3bb624cd9,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1ada61c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c19a9a87-1d7e-4917-b4d2-733d6a4de2e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bda85b31fadfe       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   4 minutes ago       Running             busybox                   0                   d03383b45e258
	61e562ffee5c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    5 minutes ago       Running             storage-provisioner       0                   a31f0c6333a6f
	5af471efb7740       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    5 minutes ago       Running             kindnet-cni               0                   5054e726d5910
	374392d2d0eff       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    5 minutes ago       Running             coredns                   0                   368ae7f59fbb5
	eb6758efc0050       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    5 minutes ago       Running             kube-proxy                0                   64bd4de6112df
	131d38cbeff7c       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    5 minutes ago       Running             kube-scheduler            0                   022e7f58532b9
	ea66bd2fc80e5       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    5 minutes ago       Running             kube-controller-manager   0                   ea3b9756a38fb
	caa0e4513e736       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    5 minutes ago       Running             etcd                      0                   5456dc5ae342a
	bbb34d9175340       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    5 minutes ago       Running             kube-apiserver            0                   bd5b31b6d14b0
	
	* 
	* ==> coredns [374392d2d0eff13b81144741312b8606f27c3eb6640fde15726d48e8ce2fb2cb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210813202419-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813202419-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=multinode-20210813202419-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_25_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:25:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813202419-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:30:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:26:48 +0000   Fri, 13 Aug 2021 20:25:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    multinode-20210813202419-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 023e64e902bf4156830ec265d715a4eb
	  System UUID:                023e64e9-02bf-4156-830e-c265d715a4eb
	  Boot ID:                    9fa698ef-eb72-480d-906a-fb3492960c09
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-nfr5z                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 coredns-558bd4d5db-58k2l                                   100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     5m22s
	  kube-system                 etcd-multinode-20210813202419-30853                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m37s
	  kube-system                 kindnet-hc4k2                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-apiserver-multinode-20210813202419-30853              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-multinode-20210813202419-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-rb42p                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-multinode-20210813202419-30853              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 5m29s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s  kubelet     Node multinode-20210813202419-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s  kubelet     Node multinode-20210813202419-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s  kubelet     Node multinode-20210813202419-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m23s  kubelet     Node multinode-20210813202419-30853 status is now: NodeReady
	  Normal  Starting                 5m20s  kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210813202419-30853-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813202419-30853-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:26:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813202419-30853-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:30:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:26:52 +0000   Fri, 13 Aug 2021 20:26:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    multinode-20210813202419-30853-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c50713fe18cb469fb76b09ab6d47701b
	  System UUID:                c50713fe-18cb-469f-b76b-09ab6d47701b
	  Boot ID:                    62516a66-415a-4559-89c4-46cc8426dd68
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-g7sjs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kindnet-nhtk5               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-proxy-8vgbg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m26s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s (x2 over 4m25s)  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x2 over 4m25s)  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x2 over 4m25s)  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m22s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m15s                  kubelet     Node multinode-20210813202419-30853-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug13 20:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093885] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.759808] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.321940] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.032447] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.923684] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1732 comm=systemd-network
	[  +1.581105] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006095] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.106824] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +14.449736] systemd-fstab-generator[2163]: Ignoring "noauto" for root device
	[  +0.138675] systemd-fstab-generator[2176]: Ignoring "noauto" for root device
	[  +0.190151] systemd-fstab-generator[2203]: Ignoring "noauto" for root device
	[  +6.959815] systemd-fstab-generator[2407]: Ignoring "noauto" for root device
	[Aug13 20:25] systemd-fstab-generator[2818]: Ignoring "noauto" for root device
	[ +14.141677] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.365438] kauditd_printk_skb: 146 callbacks suppressed
	[Aug13 20:26] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [caa0e4513e7366214b0dbc223f3517d40b1781f37f027579c94f9448f78a2cdc] <==
	* 2021-08-13 20:26:39.182016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:49.181809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:59.182686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:09.184001 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:19.185295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:29.182846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:39.182075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:49.182017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:27:59.181737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:09.181542 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:19.181864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:29.182306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:39.182712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:49.181617 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:28:59.181893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:09.182353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:19.181905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:29.181911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:39.182665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:49.183731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:29:59.182211 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:30:09.181781 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:30:19.182380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:30:29.181922 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:30:39.182439 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:30:47 up 6 min,  0 users,  load average: 0.27, 0.47, 0.26
	Linux multinode-20210813202419-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23] <==
	* I0813 20:26:26.609647       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:26:26.609822       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:26:26.609842       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:26:59.309208       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:26:59.309429       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:26:59.309470       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:27:30.575079       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:27:30.575394       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:27:30.575417       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:28:03.928641       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:28:03.928819       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:28:03.928843       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:28:41.140416       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:28:41.140530       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:28:41.140562       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:29:24.345558       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:29:24.345746       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:29:24.345764       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:30:09.214317       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:30:09.214402       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:30:09.214426       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:30:40.898303       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:30:40.898426       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:30:40.898437       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:30:45.886940       1 upgradeaware.go:387] Error proxying data from client to backend: write tcp 192.168.39.64:39076->192.168.39.64:10250: write: broken pipe
	
	* 
	* ==> kube-controller-manager [ea66bd2fc80e5fb885010ef73efa79207284c7b1758fe06e9b4a9bd1901732f7] <==
	* I0813 20:25:24.489278       1 shared_informer.go:247] Caches are synced for HPA 
	I0813 20:25:24.504785       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:25:24.511800       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:25:24.579326       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:25:24.978975       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:25:24.979079       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:25:24.996212       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:25:25.176516       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hc4k2"
	I0813 20:25:25.202607       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb42p"
	I0813 20:25:25.240652       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-nnsgn"
	I0813 20:25:25.283631       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-58k2l"
	I0813 20:25:25.455922       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:25:25.478464       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-nnsgn"
	I0813 20:25:29.350055       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0813 20:26:22.142527       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210813202419-30853-m02" does not exist
	I0813 20:26:22.192570       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nhtk5"
	I0813 20:26:22.250198       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8vgbg"
	E0813 20:26:22.309916       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"128941b6-4535-4cf8-99f7-12fec9d1ed4e", ResourceVersion:"496", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764483113, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00255fb78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00255fb90)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00255fba8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00255fbc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00252d5a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Creat
ionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00255fbd8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexV
olumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00255fbf0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVol
umeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSI
VolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00255fc08), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v
1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00252d5c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00252d600)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amoun
t{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropag
ation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0025a70e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00259d818), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007712d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil
), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0025be5d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00259d860)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition
(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:26:22.321375       1 range_allocator.go:373] Set node multinode-20210813202419-30853-m02 PodCIDR to [10.244.1.0/24]
	I0813 20:26:24.360327       1 event.go:291] "Event occurred" object="multinode-20210813202419-30853-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210813202419-30853-m02 event: Registered Node multinode-20210813202419-30853-m02 in Controller"
	W0813 20:26:24.360580       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210813202419-30853-m02. Assuming now as a timestamp.
	I0813 20:26:34.758654       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0813 20:26:34.772682       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-g7sjs"
	I0813 20:26:34.779365       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-nfr5z"
	
	* 
	* ==> kube-proxy [eb6758efc00503519cf51518c2ce85c53e1c9ef223f46ee73e5b8ecb0c4ccd1e] <==
	* I0813 20:25:27.511844       1 node.go:172] Successfully retrieved node IP: 192.168.39.64
	I0813 20:25:27.511964       1 server_others.go:140] Detected node IP 192.168.39.64
	W0813 20:25:27.511986       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:25:27.651746       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:25:27.651766       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:25:27.651780       1 server_others.go:212] Using iptables Proxier.
	I0813 20:25:27.652928       1 server.go:643] Version: v1.21.3
	I0813 20:25:27.660952       1 config.go:315] Starting service config controller
	I0813 20:25:27.661041       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:25:27.661061       1 config.go:224] Starting endpoint slice config controller
	I0813 20:25:27.661065       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:25:27.674655       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:25:27.676427       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:25:27.762085       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:25:27.762283       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [131d38cbeff7c2315e723bbfaa62bae740bfb11c373635e6c1b60337b1c256f2] <==
	* E0813 20:25:08.938811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:25:08.938904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:25:08.940418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940472       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:25:08.940518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:25:08.940562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:08.940642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:25:08.941087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:25:09.901889       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:09.931424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:25:10.012721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:25:10.136085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:25:10.142736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:25:10.161741       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:25:10.230611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:10.234412       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:25:10.262511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:25:10.272680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:25:10.282797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:10.313505       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:10.385231       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:25:10.474239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0813 20:25:12.033299       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:24:30 UTC, end at Fri 2021-08-13 20:30:47 UTC. --
	Aug 13 20:25:25 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:25.407075    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6xv\" (UniqueName: \"kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv\") pod \"coredns-558bd4d5db-nnsgn\" (UID: \"13726b49-920a-4c21-9f87-b969630657c6\") "
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.559507    2827 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq6xv\" (UniqueName: \"kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv\") pod \"13726b49-920a-4c21-9f87-b969630657c6\" (UID: \"13726b49-920a-4c21-9f87-b969630657c6\") "
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.559563    2827 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13726b49-920a-4c21-9f87-b969630657c6-config-volume\") pod \"13726b49-920a-4c21-9f87-b969630657c6\" (UID: \"13726b49-920a-4c21-9f87-b969630657c6\") "
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: W0813 20:25:27.559787    2827 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/13726b49-920a-4c21-9f87-b969630657c6/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.559932    2827 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13726b49-920a-4c21-9f87-b969630657c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "13726b49-920a-4c21-9f87-b969630657c6" (UID: "13726b49-920a-4c21-9f87-b969630657c6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.572416    2827 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv" (OuterVolumeSpecName: "kube-api-access-bq6xv") pod "13726b49-920a-4c21-9f87-b969630657c6" (UID: "13726b49-920a-4c21-9f87-b969630657c6"). InnerVolumeSpecName "kube-api-access-bq6xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.703823    2827 reconciler.go:319] "Volume detached for volume \"kube-api-access-bq6xv\" (UniqueName: \"kubernetes.io/projected/13726b49-920a-4c21-9f87-b969630657c6-kube-api-access-bq6xv\") on node \"multinode-20210813202419-30853\" DevicePath \"\""
	Aug 13 20:25:27 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:27.703854    2827 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13726b49-920a-4c21-9f87-b969630657c6-config-volume\") on node \"multinode-20210813202419-30853\" DevicePath \"\""
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:28.213056    2827 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:28.312789    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7839155d-5552-45cb-ab31-a243fd82f32e-tmp\") pod \"storage-provisioner\" (UID: \"7839155d-5552-45cb-ab31-a243fd82f32e\") "
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:28.312965    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srfr\" (UniqueName: \"kubernetes.io/projected/7839155d-5552-45cb-ab31-a243fd82f32e-kube-api-access-4srfr\") pod \"storage-provisioner\" (UID: \"7839155d-5552-45cb-ab31-a243fd82f32e\") "
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/memory/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: time="2021-08-13T20:25:28Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13726b49_920a_4c21_9f87_b969630657c6.slice/crio-40557107bd6f5b7a20954421543ab5c84f888dfda08e495e4eb3ea60002c9e25.scope: device or resource busy"
	Aug 13 20:25:28 multinode-20210813202419-30853 kubelet[2827]: E0813 20:25:28.642224    2827 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/8c73e66e-2ec6-4a1b-a7af-3edb2c517f18/etc-hosts with error exit status 1" pod="kube-system/kindnet-hc4k2"
	Aug 13 20:25:29 multinode-20210813202419-30853 kubelet[2827]: E0813 20:25:29.121662    2827 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = open /proc/3411/stat: no such file or directory: container process not found"
	Aug 13 20:25:29 multinode-20210813202419-30853 kubelet[2827]: E0813 20:25:29.121801    2827 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = open /proc/3411/stat: no such file or directory: container process not found" pod="kube-system/coredns-558bd4d5db-nnsgn"
	Aug 13 20:25:30 multinode-20210813202419-30853 kubelet[2827]: I0813 20:25:30.653649    2827 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:26:34 multinode-20210813202419-30853 kubelet[2827]: I0813 20:26:34.796437    2827 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:26:34 multinode-20210813202419-30853 kubelet[2827]: I0813 20:26:34.980987    2827 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspch\" (UniqueName: \"kubernetes.io/projected/25a75dcb-606f-4b7d-8767-8d6e54d476b1-kube-api-access-qspch\") pod \"busybox-84b6686758-nfr5z\" (UID: \"25a75dcb-606f-4b7d-8767-8d6e54d476b1\") "
	
	* 
	* ==> storage-provisioner [61e562ffee5c9435fcd29f13a8ee2941d46cd114c29e77a465c9d3c827d71a1a] <==
	* I0813 20:25:29.920583       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:25:29.940741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:25:29.941291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:25:29.960266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:25:29.962081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf84d585-93d6-4de2-bf54-ae6b01640a94", APIVersion:"v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210813202419-30853_ea188996-5464-4a66-8fe1-fc426d592470 became leader
	I0813 20:25:29.966951       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210813202419-30853_ea188996-5464-4a66-8fe1-fc426d592470!
	I0813 20:25:30.069775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210813202419-30853_ea188996-5464-4a66-8fe1-fc426d592470!
	

                                                
                                                
-- /stdout --
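
The storage-provisioner log above shows an ordinary client-go leader election: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lease before starting its controller, so only one instance provisions volumes at a time. A hypothetical manual check of the current holder, not something this harness runs, would read the leader-election annotation off the Endpoints object used as the lock:

	# The holder identity is recorded in the control-plane.alpha.kubernetes.io/leader
	# annotation on the lock object named in the log above.
	kubectl --context multinode-20210813202419-30853 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml | grep leader
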
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210813202419-30853 -n multinode-20210813202419-30853
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210813202419-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210813202419-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210813202419-30853 describe pod : exit status 1 (52.761531ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context multinode-20210813202419-30853 describe pod : exit status 1
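
The empty describe invocation above fails by design: the field selector in the previous step matched no non-running pods, so the post-mortem helper passes an empty pod name, which kubectl rejects before contacting the API server. A hypothetical standalone reproduction under the same context (assumed kubectl behavior, not output captured from this run):

	# An explicit empty name argument trips kubectl's resource-name validation:
	kubectl --context multinode-20210813202419-30853 describe pod ""
	# error: resource name may not be empty
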
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (63.48s)

                                                
                                    
TestPreload (203.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813204102-30853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.0
E0813 20:41:07.715696   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:41:53.558780   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:42:30.759159   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813204102-30853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.0: (2m35.276279222s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813204102-30853 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210813204102-30853 -- sudo crictl pull busybox: (3.697732404s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813204102-30853 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813204102-30853 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.3: (40.531178384s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813204102-30853 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

                                                
                                                
-- /stdout --
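
For context on the empty image table: the second start requested --kubernetes-version=v1.17.3, for which no preload tarball exists (the Last Start log below records a 404 for preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4), so the busybox image pulled into CRI-O before the restart did not survive it. A hypothetical manual re-check of both facts, outside the harness:

	# Probe the preload bucket for the exact tarball the log reports missing;
	# a 404 means this version/runtime pair has no preloaded image set.
	curl -s -o /dev/null -w '%{http_code}\n' \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4
	# Re-run the assertion the test makes after the restart:
	out/minikube-linux-amd64 ssh -p test-preload-20210813204102-30853 -- sudo crictl image ls
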
panic.go:613: *** TestPreload FAILED at 2021-08-13 20:44:22.653628689 +0000 UTC m=+2194.658193823
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210813204102-30853 -n test-preload-20210813204102-30853
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210813204102-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210813204102-30853 logs -n 25: (1.463172911s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                             |              Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                          | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:45 UTC | Fri, 13 Aug 2021 20:30:45 UTC |
	|         | multinode-20210813202419-30853                              |                                    |         |         |                               |                               |
	|         | -- exec                                                     |                                    |         |         |                               |                               |
	|         | busybox-84b6686758-nfr5z                                    |                                    |         |         |                               |                               |
	|         | -- sh -c nslookup                                           |                                    |         |         |                               |                               |
	|         | host.minikube.internal | awk                                |                                    |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                     |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:46 UTC | Fri, 13 Aug 2021 20:30:47 UTC |
	|         | logs -n 25                                                  |                                    |         |         |                               |                               |
	| node    | add -p                                                      | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:48 UTC | Fri, 13 Aug 2021 20:31:42 UTC |
	|         | multinode-20210813202419-30853                              |                                    |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                      |                                    |         |         |                               |                               |
	| profile | list --output json                                          | minikube                           | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:43 UTC | Fri, 13 Aug 2021 20:31:43 UTC |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:44 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | cp testdata/cp-test.txt                                     |                                    |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                    |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:44 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | ssh sudo cat                                                |                                    |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                    |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853 cp testdata/cp-test.txt      | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:44 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | multinode-20210813202419-30853-m02:/home/docker/cp-test.txt |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:44 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | ssh -n                                                      |                                    |         |         |                               |                               |
	|         | multinode-20210813202419-30853-m02                          |                                    |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853 cp testdata/cp-test.txt      | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:44 UTC | Fri, 13 Aug 2021 20:31:45 UTC |
	|         | multinode-20210813202419-30853-m03:/home/docker/cp-test.txt |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:45 UTC | Fri, 13 Aug 2021 20:31:45 UTC |
	|         | ssh -n                                                      |                                    |         |         |                               |                               |
	|         | multinode-20210813202419-30853-m03                          |                                    |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:45 UTC | Fri, 13 Aug 2021 20:31:47 UTC |
	|         | node stop m03                                               |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:48 UTC | Fri, 13 Aug 2021 20:32:36 UTC |
	|         | node start m03                                              |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	| stop    | -p                                                          | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:32:37 UTC | Fri, 13 Aug 2021 20:32:44 UTC |
	|         | multinode-20210813202419-30853                              |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:32:44 UTC | Fri, 13 Aug 2021 20:35:39 UTC |
	|         | multinode-20210813202419-30853                              |                                    |         |         |                               |                               |
	|         | --wait=true -v=8                                            |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:39 UTC | Fri, 13 Aug 2021 20:35:40 UTC |
	|         | node delete m03                                             |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:41 UTC | Fri, 13 Aug 2021 20:35:45 UTC |
	|         | stop                                                        |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:45 UTC | Fri, 13 Aug 2021 20:38:16 UTC |
	|         | multinode-20210813202419-30853                              |                                    |         |         |                               |                               |
	|         | --wait=true -v=8                                            |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	|         | --driver=kvm2                                               |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210813202419-30853-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:17 UTC | Fri, 13 Aug 2021 20:39:13 UTC |
	|         | multinode-20210813202419-30853-m03                          |                                    |         |         |                               |                               |
	|         | --driver=kvm2                                               |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	| delete  | -p                                                          | multinode-20210813202419-30853-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:13 UTC | Fri, 13 Aug 2021 20:39:14 UTC |
	|         | multinode-20210813202419-30853-m03                          |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853                              | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:14 UTC | Fri, 13 Aug 2021 20:39:16 UTC |
	|         | logs -n 25                                                  |                                    |         |         |                               |                               |
	| delete  | -p                                                          | multinode-20210813202419-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:16 UTC | Fri, 13 Aug 2021 20:39:18 UTC |
	|         | multinode-20210813202419-30853                              |                                    |         |         |                               |                               |
	| start   | -p                                                          | test-preload-20210813204102-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:02 UTC | Fri, 13 Aug 2021 20:43:38 UTC |
	|         | test-preload-20210813204102-30853                           |                                    |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                             |                                    |         |         |                               |                               |
	|         | --wait=true --preload=false                                 |                                    |         |         |                               |                               |
	|         | --driver=kvm2                                               |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                |                                    |         |         |                               |                               |
	| ssh     | -p                                                          | test-preload-20210813204102-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:38 UTC | Fri, 13 Aug 2021 20:43:41 UTC |
	|         | test-preload-20210813204102-30853                           |                                    |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                 |                                    |         |         |                               |                               |
	| start   | -p                                                          | test-preload-20210813204102-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:41 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853                           |                                    |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                             |                                    |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2                              |                                    |         |         |                               |                               |
	|         |  --container-runtime=crio                                   |                                    |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                |                                    |         |         |                               |                               |
	| ssh     | -p                                                          | test-preload-20210813204102-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853                           |                                    |         |         |                               |                               |
	|         | -- sudo crictl image ls                                     |                                    |         |         |                               |                               |
	|---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:43:41
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:43:41.937219   32742 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:43:41.937576   32742 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:43:41.937593   32742 out.go:311] Setting ErrFile to fd 2...
	I0813 20:43:41.937600   32742 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:43:41.937863   32742 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:43:41.938512   32742 out.go:305] Setting JSON to false
	I0813 20:43:41.976026   32742 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":8784,"bootTime":1628878638,"procs":166,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:43:41.976134   32742 start.go:121] virtualization: kvm guest
	I0813 20:43:41.978471   32742 out.go:177] * [test-preload-20210813204102-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:43:41.979920   32742 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:43:41.978618   32742 notify.go:169] Checking for updates...
	I0813 20:43:41.981370   32742 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:43:41.982808   32742 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:43:41.984142   32742 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:43:41.984510   32742 config.go:177] Loaded profile config "test-preload-20210813204102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0813 20:43:41.984898   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:43:41.984953   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:43:41.995367   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33801
	I0813 20:43:41.995814   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:43:41.996253   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:43:41.996275   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:43:41.996687   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:43:41.996876   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:41.998527   32742 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:43:41.998565   32742 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:43:41.998916   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:43:41.998951   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:43:42.011293   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0813 20:43:42.011692   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:43:42.012152   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:43:42.012177   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:43:42.012525   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:43:42.012733   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:42.041767   32742 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:43:42.041792   32742 start.go:278] selected driver: kvm2
	I0813 20:43:42.041799   32742 start.go:751] validating driver "kvm2" against &{Name:test-preload-20210813204102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20210813204102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:43:42.041895   32742 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:43:42.042867   32742 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.043011   32742 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:43:42.054010   32742 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:43:42.054301   32742 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:43:42.054324   32742 cni.go:93] Creating CNI manager for ""
	I0813 20:43:42.054333   32742 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:43:42.054341   32742 start_flags.go:277] config:
	{Name:test-preload-20210813204102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813204102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:43:42.054449   32742 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.056446   32742 out.go:177] * Starting control plane node test-preload-20210813204102-30853 in cluster test-preload-20210813204102-30853
	I0813 20:43:42.056471   32742 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	W0813 20:43:42.124552   32742 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0813 20:43:42.124694   32742 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/config.json ...
	I0813 20:43:42.124831   32742 cache.go:108] acquiring lock: {Name:mk46180cf67d5c541fa2597ef8e0122b51c3d66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.124848   32742 cache.go:108] acquiring lock: {Name:mk9c5536a1a5337f8f1114ba3c5dcf18facbff5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.124880   32742 cache.go:108] acquiring lock: {Name:mk31f481dcd851cdb1edf94de55f6dd623487498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.124894   32742 cache.go:108] acquiring lock: {Name:mk5e756c0190c8a45e8f7c706b8aa2ad2059dbed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.124944   32742 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:43:42.124938   32742 cache.go:108] acquiring lock: {Name:mkec6e53ab9796f80ec65d6b99a6c3ee881fedd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.124961   32742 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 20:43:42.124967   32742 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 143.542µs
	I0813 20:43:42.124986   32742 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:43:42.124982   32742 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 143.165µs
	I0813 20:43:42.124917   32742 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:43:42.124955   32742 cache.go:108] acquiring lock: {Name:mk8abd6937dc27237690534518d6df7fe6b7647d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.125028   32742 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:43:42.125012   32742 cache.go:108] acquiring lock: {Name:mk936b3ccd2d330dec6a93c3a9dd4ec7c8734554 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.125049   32742 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 114.342µs
	I0813 20:43:42.125064   32742 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:43:42.124953   32742 cache.go:108] acquiring lock: {Name:mk0a77f622fcfcaa441daa474c08ccefb09586ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.125075   32742 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0813 20:43:42.125092   32742 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0813 20:43:42.125032   32742 start.go:313] acquiring machines lock for test-preload-20210813204102-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:43:42.125114   32742 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 103.975µs
	I0813 20:43:42.125126   32742 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0813 20:43:42.125065   32742 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:43:42.125076   32742 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:43:42.125143   32742 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:43:42.125158   32742 start.go:317] acquired machines lock for "test-preload-20210813204102-30853" in 38.966µs
	I0813 20:43:42.125182   32742 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:43:42.125193   32742 fix.go:55] fixHost starting: 
	I0813 20:43:42.124994   32742 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 20:43:42.125112   32742 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 176.598µs
	I0813 20:43:42.125227   32742 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0813 20:43:42.124977   32742 cache.go:108] acquiring lock: {Name:mke6b134078f16ec1e9750c1963262ca4e1f7667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.125447   32742 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:43:42.125627   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:43:42.125675   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:43:42.125703   32742 cache.go:108] acquiring lock: {Name:mkf1d6f5d79a8fed4d2cc99505f5f3464b88e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:42.125833   32742 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:43:42.125855   32742 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 669.517µs
	I0813 20:43:42.125883   32742 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:43:42.126161   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:42.126167   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:42.126161   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:42.126356   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:42.137033   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36097
	I0813 20:43:42.137403   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:43:42.137913   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:43:42.137958   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:43:42.138275   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:43:42.138430   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:42.138573   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetState
	I0813 20:43:42.141599   32742 fix.go:108] recreateIfNeeded on test-preload-20210813204102-30853: state=Running err=<nil>
	W0813 20:43:42.141643   32742 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:43:42.143885   32742 out.go:177] * Updating the running kvm2 "test-preload-20210813204102-30853" VM ...
	I0813 20:43:42.143917   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:42.144086   32742 machine.go:88] provisioning docker machine ...
	I0813 20:43:42.144109   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:42.144250   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetMachineName
	I0813 20:43:42.144365   32742 buildroot.go:166] provisioning hostname "test-preload-20210813204102-30853"
	I0813 20:43:42.144385   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetMachineName
	I0813 20:43:42.144488   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:42.148981   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.149364   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:42.149393   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.149493   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:42.149639   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.149753   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.149856   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:42.149952   32742 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:42.150084   32742 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0813 20:43:42.150103   32742 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210813204102-30853 && echo "test-preload-20210813204102-30853" | sudo tee /etc/hostname
	I0813 20:43:42.293447   32742 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210813204102-30853
	
	I0813 20:43:42.293472   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:42.298813   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.299172   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:42.299209   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.299338   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:42.299497   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.299619   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.299719   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:42.299875   32742 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:42.300018   32742 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0813 20:43:42.300038   32742 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210813204102-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210813204102-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210813204102-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:43:42.428601   32742 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:43:42.428637   32742 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:43:42.428656   32742 buildroot.go:174] setting up certificates
	I0813 20:43:42.428667   32742 provision.go:83] configureAuth start
	I0813 20:43:42.428678   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetMachineName
	I0813 20:43:42.429030   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetIP
	I0813 20:43:42.434196   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.434519   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:42.434548   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.434654   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:42.438806   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.439165   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:42.439193   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.439239   32742 provision.go:138] copyHostCerts
	I0813 20:43:42.439303   32742 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:43:42.439321   32742 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:43:42.439381   32742 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:43:42.439494   32742 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:43:42.439507   32742 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:43:42.439548   32742 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:43:42.439621   32742 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:43:42.439633   32742 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:43:42.439663   32742 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:43:42.439717   32742 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210813204102-30853 san=[192.168.39.171 192.168.39.171 localhost 127.0.0.1 minikube test-preload-20210813204102-30853]
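
The server cert generated here embeds the SAN list shown in the san=[...] field (the node IP twice, localhost, 127.0.0.1, minikube, and the profile name). A quick spot-check of that list on the host, assuming openssl is available, is a sketch like:

	# Inspect the SAN list baked into the new server cert.
	# MINIKUBE_HOME stands in for the long .minikube path in the log above.
	openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
	  | grep -A1 'Subject Alternative Name'
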
	I0813 20:43:42.478801   32742 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 20:43:42.478882   32742 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 20:43:42.483015   32742 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 20:43:42.492371   32742 provision.go:172] copyRemoteCerts
	I0813 20:43:42.492414   32742 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:43:42.492445   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:42.497434   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.497760   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:42.497794   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.497921   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:42.498073   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.498203   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:42.498349   32742 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813204102-30853/id_rsa Username:docker}
	I0813 20:43:42.508160   32742 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 20:43:42.590159   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:43:42.609461   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 20:43:42.632836   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:43:42.654612   32742 provision.go:86] duration metric: configureAuth took 225.933202ms
	I0813 20:43:42.654637   32742 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:43:42.654809   32742 config.go:177] Loaded profile config "test-preload-20210813204102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.3
	I0813 20:43:42.654958   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:42.660937   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.661325   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:42.661352   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:42.661527   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:42.661697   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.661867   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:42.662020   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:42.662164   32742 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:42.662341   32742 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0813 20:43:42.662369   32742 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:43:43.224447   32742 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0813 20:43:43.224502   32742 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 1.099522955s
	I0813 20:43:43.224521   32742 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0813 20:43:43.371114   32742 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0813 20:43:43.371160   32742 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 1.246282708s
	I0813 20:43:43.371180   32742 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0813 20:43:43.391055   32742 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0813 20:43:43.391104   32742 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 1.266247714s
	I0813 20:43:43.391122   32742 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0813 20:43:43.942543   32742 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0813 20:43:43.942610   32742 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 1.817657094s
	I0813 20:43:43.942629   32742 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0813 20:43:43.942678   32742 cache.go:88] Successfully saved all images to host disk.
	I0813 20:43:43.971073   32742 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
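
The SSH output above echoes the CRIO_MINIKUBE_OPTIONS drop-in back, confirming it was written before CRI-O restarted. A manual re-check from the host, assuming the profile name from this run, would be a sketch like:

	# Confirm the drop-in landed and CRI-O came back up.
	minikube -p test-preload-20210813204102-30853 ssh \
	  "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"
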
	I0813 20:43:43.971107   32742 machine.go:91] provisioned docker machine in 1.827002695s
	I0813 20:43:43.971118   32742 start.go:267] post-start starting for "test-preload-20210813204102-30853" (driver="kvm2")
	I0813 20:43:43.971123   32742 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:43:43.971138   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:43.971441   32742 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:43:43.971478   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:43.976888   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:43.977160   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:43.977188   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:43.977349   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:43.977524   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:43.977642   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:43.977792   32742 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813204102-30853/id_rsa Username:docker}
	I0813 20:43:44.070725   32742 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:43:44.075181   32742 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:43:44.075209   32742 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:43:44.075277   32742 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:43:44.075417   32742 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:43:44.075537   32742 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:43:44.082725   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:43:44.102931   32742 start.go:270] post-start completed in 131.801256ms
	I0813 20:43:44.102969   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:44.103212   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:44.108249   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.108592   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:44.108620   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.108753   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:44.108938   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:44.109093   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:44.109218   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:44.109355   32742 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:44.109494   32742 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0813 20:43:44.109505   32742 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 20:43:44.236616   32742 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887424.237522635
	
	I0813 20:43:44.236640   32742 fix.go:212] guest clock: 1628887424.237522635
	I0813 20:43:44.236650   32742 fix.go:225] Guest: 2021-08-13 20:43:44.237522635 +0000 UTC Remote: 2021-08-13 20:43:44.10319851 +0000 UTC m=+2.211213602 (delta=134.324125ms)
	I0813 20:43:44.236676   32742 fix.go:196] guest clock delta is within tolerance: 134.324125ms
	I0813 20:43:44.236684   32742 fix.go:57] fixHost completed within 2.111491559s
	I0813 20:43:44.236694   32742 start.go:80] releasing machines lock for "test-preload-20210813204102-30853", held for 2.111523548s
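
fix.go's clock check samples the guest with date +%s.%N and diffs it against the host clock; here 1628887424.237522635 − 1628887424.103198510 ≈ 0.134324125 s, the 134.324125ms delta reported above. A standalone sketch of the same comparison, assuming this profile:

	# Measure host/guest clock skew the same way the run does.
	host=$(date +%s.%N)
	guest=$(minikube -p test-preload-20210813204102-30853 ssh "date +%s.%N")
	echo "$host $guest" | awk '{d = $2 - $1; if (d < 0) d = -d; print d " s skew"}'
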
	I0813 20:43:44.236739   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:44.237005   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetIP
	I0813 20:43:44.242008   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.242286   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:44.242321   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.242457   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:44.242619   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:44.243063   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:43:44.243263   32742 ssh_runner.go:149] Run: systemctl --version
	I0813 20:43:44.243288   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:44.243315   32742 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:43:44.243354   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:43:44.248021   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.248392   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:44.248426   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.248517   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:44.248650   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:44.248780   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:44.248911   32742 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813204102-30853/id_rsa Username:docker}
	I0813 20:43:44.249104   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.249455   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:44.249487   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:44.249581   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:43:44.249706   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:43:44.249805   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:43:44.249930   32742 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813204102-30853/id_rsa Username:docker}
	I0813 20:43:44.341390   32742 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 20:43:44.341463   32742 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:43:44.358273   32742 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:43:44.368322   32742 docker.go:153] disabling docker service ...
	I0813 20:43:44.368370   32742 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:43:44.379118   32742 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:43:44.389588   32742 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:43:44.597221   32742 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:43:44.797866   32742 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
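
With containerd and Docker both confirmed inactive, the node is handed to CRI-O. Condensed, the docker-off sequence performed above is:

	# Sketch (guest side): stop, disable and mask Docker, then re-check.
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	systemctl is-active --quiet docker || echo "docker is inactive"
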
	I0813 20:43:44.809253   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:43:44.823738   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
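
Two CRI-O prep steps run back to back: /etc/crictl.yaml points crictl at the crio socket, and pause_image in /etc/crio/crio.conf is pinned to k8s.gcr.io/pause:3.1. Both are easy to verify from inside the guest:

	# Check crictl's endpoint config and the pinned pause image.
	sudo crictl version                      # crictl reads /etc/crictl.yaml
	grep '^pause_image' /etc/crio/crio.conf  # expect "k8s.gcr.io/pause:3.1"
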
	I0813 20:43:44.831411   32742 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:43:44.838187   32742 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:43:44.844606   32742 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:43:45.036747   32742 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:43:45.174871   32742 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:43:45.174946   32742 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:45.183123   32742 start.go:413] Will wait 60s for crictl version
	I0813 20:43:45.183183   32742 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:43:45.214816   32742 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:43:45.214904   32742 ssh_runner.go:149] Run: crio --version
	I0813 20:43:45.331731   32742 ssh_runner.go:149] Run: crio --version
	I0813 20:43:45.437647   32742 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.2 ...
	I0813 20:43:45.437695   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetIP
	I0813 20:43:45.442868   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:45.443214   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:43:45.443246   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:43:45.443440   32742 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:43:45.448862   32742 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 20:43:45.448902   32742 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:43:45.497597   32742 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0813 20:43:45.497622   32742 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 20:43:45.497673   32742 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:43:45.497707   32742 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0813 20:43:45.497726   32742 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:43:45.497773   32742 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0813 20:43:45.497782   32742 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:43:45.497837   32742 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:43:45.497864   32742 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:43:45.497914   32742 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:43:45.497962   32742 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 20:43:45.498115   32742 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:43:45.498637   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:45.498928   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:45.498930   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:45.498928   32742 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:43:45.520743   32742 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc000634160}
	I0813 20:43:45.520824   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0813 20:43:45.820159   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:43:45.820238   32742 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc0006340a0}
	I0813 20:43:45.820356   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:43:45.820971   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:43:45.825179   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:43:45.832604   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:43:45.982743   32742 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc0006341e0}
	I0813 20:43:45.982848   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0813 20:43:45.988450   32742 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000634680}
	I0813 20:43:45.988522   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:43:46.624666   32742 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0813 20:43:46.624754   32742 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0813 20:43:46.624783   32742 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:43:46.624793   32742 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:43:46.624832   32742 ssh_runner.go:149] Run: which crictl
	I0813 20:43:46.624837   32742 ssh_runner.go:149] Run: which crictl
	I0813 20:43:46.624711   32742 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0813 20:43:46.624898   32742 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:43:46.624939   32742 ssh_runner.go:149] Run: which crictl
	I0813 20:43:46.669306   32742 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0813 20:43:46.669363   32742 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:43:46.669425   32742 ssh_runner.go:149] Run: which crictl
	I0813 20:43:46.710614   32742 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:43:46.710698   32742 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:43:46.710745   32742 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:43:46.710704   32742 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:43:46.781432   32742 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 20:43:46.781535   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 20:43:46.802818   32742 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 20:43:46.802905   32742 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 20:43:46.802930   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 20:43:46.802986   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 20:43:46.803049   32742 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0813 20:43:46.803074   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
	I0813 20:43:46.803182   32742 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 20:43:46.803238   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 20:43:46.810110   32742 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0813 20:43:46.810142   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0813 20:43:46.822513   32742 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0813 20:43:46.822547   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0813 20:43:46.822647   32742 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0813 20:43:46.822667   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
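
Each transfer above is gated by an existence check: stat -c "%s %y" reports size and mtime, and only when it exits 1 with "No such file or directory" does the scp run (here every image is missing, so all four are copied). The copy-if-missing pattern, sketched with a hypothetical "guest" host alias:

	# Copy an image tarball only if the guest does not already have it.
	src="$HOME/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3"
	dst=/var/lib/minikube/images/kube-proxy_v1.17.3
	ssh guest "stat -c '%s %y' $dst" >/dev/null 2>&1 || scp "$src" "guest:$dst"
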
	I0813 20:43:47.925533   32742 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 20:43:47.925620   32742 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 20:43:48.015215   32742 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000cf8120}
	I0813 20:43:48.015331   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:43:48.579240   32742 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc0012e81c0}
	I0813 20:43:48.579332   32742 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0813 20:43:50.677661   32742 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (2.662306094s)
	I0813 20:43:50.677731   32742 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (2.752072351s)
	I0813 20:43:50.677731   32742 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (2.098377085s)
	I0813 20:43:50.677755   32742 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0813 20:43:50.677795   32742 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 20:43:50.677849   32742 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 20:43:53.428348   32742 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (2.750478143s)
	I0813 20:43:53.428382   32742 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0813 20:43:53.428412   32742 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 20:43:53.428467   32742 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 20:43:58.292376   32742 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (4.863882718s)
	I0813 20:43:58.292410   32742 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0813 20:43:58.292436   32742 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 20:43:58.292485   32742 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 20:44:02.925414   32742 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (4.632904576s)
	I0813 20:44:02.925439   32742 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0813 20:44:02.925466   32742 cache_images.go:113] Successfully loaded all cached images
	I0813 20:44:02.925472   32742 cache_images.go:82] LoadImages completed in 17.427838242s
	I0813 20:44:02.925542   32742 ssh_runner.go:149] Run: crio config
	I0813 20:44:03.087765   32742 cni.go:93] Creating CNI manager for ""
	I0813 20:44:03.087791   32742 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:44:03.087801   32742 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:44:03.087814   32742 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210813204102-30853 NodeName:test-preload-20210813204102-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.171 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:44:03.088035   32742 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210813204102-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
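This rendered config is what lands on the guest as /var/tmp/minikube/kubeadm.yaml.new (2080 bytes, per the scp below). A quick parse sanity-check, assuming the v1.17.3 kubeadm installed later in this run, is to ask it which images the config implies:

	# Confirm the generated kubeadm config parses.
	sudo /var/lib/minikube/binaries/v1.17.3/kubeadm config images list \
	  --config /var/tmp/minikube/kubeadm.yaml.new
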
	I0813 20:44:03.088157   32742 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=test-preload-20210813204102-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.171 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813204102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
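
The empty ExecStart= in the unit above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the v1.17.3 command line is set. Once the 10-kubeadm.conf drop-in and kubelet.service are scp'd into place below, the merged unit can be reviewed with:

	# Show the effective kubelet unit, drop-ins included.
	sudo systemctl cat kubelet
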
	I0813 20:44:03.088226   32742 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0813 20:44:03.096397   32742 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0813 20:44:03.096444   32742 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0813 20:44:03.103333   32742 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubeadm
	I0813 20:44:03.103333   32742 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubectl
	I0813 20:44:03.103333   32742 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubelet
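
Each download URL carries a ?checksum=file:<url>.sha256 fragment, i.e. the published .sha256 file is fetched alongside the binary and the download is verified against it (the fragment style matches hashicorp/go-getter, though the log does not name the library). A manual equivalent for kubeadm:

	# Checksum-verified fetch of the kubeadm binary.
	base=https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64
	curl -fsSLO "$base/kubeadm"
	echo "$(curl -fsSL "$base/kubeadm.sha256")  kubeadm" | sha256sum -c -
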
	I0813 20:44:03.609906   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0813 20:44:03.614910   32742 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0813 20:44:03.614961   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0813 20:44:03.647330   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0813 20:44:03.678461   32742 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0813 20:44:03.678494   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0813 20:44:04.448635   32742 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:44:04.460964   32742 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:44:04.504095   32742 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0813 20:44:04.509788   32742 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0813 20:44:04.509833   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
	I0813 20:44:05.107428   32742 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:44:05.114646   32742 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (514 bytes)
	I0813 20:44:05.130717   32742 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:44:05.142646   32742 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2080 bytes)
	I0813 20:44:05.156541   32742 ssh_runner.go:149] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0813 20:44:05.160747   32742 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853 for IP: 192.168.39.171
	I0813 20:44:05.160792   32742 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:44:05.160814   32742 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:44:05.160870   32742 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/client.key
	I0813 20:44:05.160896   32742 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/apiserver.key.0e433718
	I0813 20:44:05.160918   32742 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/proxy-client.key
	I0813 20:44:05.161049   32742 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:44:05.161100   32742 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:44:05.161115   32742 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:44:05.161153   32742 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:44:05.161190   32742 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:44:05.161236   32742 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:44:05.161294   32742 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:44:05.162807   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:44:05.180312   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:44:05.197068   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:44:05.214572   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:44:05.231510   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:44:05.248699   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:44:05.264741   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:44:05.283114   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:44:05.301215   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:44:05.318358   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:44:05.337163   32742 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:44:05.353941   32742 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:44:05.366475   32742 ssh_runner.go:149] Run: openssl version
	I0813 20:44:05.372904   32742 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:44:05.380921   32742 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:44:05.385866   32742 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:44:05.385916   32742 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:44:05.391739   32742 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:44:05.398582   32742 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:44:05.407486   32742 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:44:05.413938   32742 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:44:05.413993   32742 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:44:05.419837   32742 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:44:05.426423   32742 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:44:05.435449   32742 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:44:05.440256   32742 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:44:05.440298   32742 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:44:05.446206   32742 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
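The openssl/ln sequence above is how the guest's trust store gets populated: OpenSSL looks up CA certificates under /etc/ssl/certs by subject hash, so each .pem is hashed with `openssl x509 -hash -noout` and symlinked as `<hash>.0` (here 51391683.0, 3ec20f2e.0, and b5213941.0). A hedged Go sketch of the same two steps; installCACert is a hypothetical helper, not minikube code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert makes pem visible to OpenSSL's trust lookup, which scans
    // /etc/ssl/certs for files named <subject-hash>.0.
    func installCACert(pem string) error {
        // `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        // Equivalent of `ln -fs`: drop any stale link, then recreate it.
        _ = os.Remove(link)
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }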
	I0813 20:44:05.452641   32742 kubeadm.go:390] StartCluster: {Name:test-preload-20210813204102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813204102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:44:05.452711   32742 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:44:05.452743   32742 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:44:05.490209   32742 cri.go:76] found id: "13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74"
	I0813 20:44:05.490226   32742 cri.go:76] found id: "0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb"
	I0813 20:44:05.490231   32742 cri.go:76] found id: "5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e"
	I0813 20:44:05.490235   32742 cri.go:76] found id: "4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb"
	I0813 20:44:05.490238   32742 cri.go:76] found id: "89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1"
	I0813 20:44:05.490242   32742 cri.go:76] found id: "03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712"
	I0813 20:44:05.490245   32742 cri.go:76] found id: "69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390"
	I0813 20:44:05.490250   32742 cri.go:76] found id: ""
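The seven IDs above come from filtering CRI-O's container list down to the kube-system namespace; the blank trailing `found id: ""` is most likely just the empty element left over from splitting crictl's newline-terminated output. A small sketch of the same query, assuming crictl is on the guest's PATH (an illustration, not minikube's cri.go):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // -a includes exited containers; --quiet prints one bare ID per line.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        // strings.Fields drops the empty trailing entry that a plain
        // newline split keeps (the blank `found id: ""` above).
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }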
	I0813 20:44:05.490276   32742 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:44:05.529146   32742 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712","pid":3562,"status":"running","bundle":"/run/containers/storage/overlay-containers/03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712/userdata","rootfs":"/var/lib/containers/storage/overlay/c93159f9f83d5b8fe488567058f726601e23c125b1c837dcd5e3a63a2cc90221/merged","created":"2021-08-13T20:42:27.549107216Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"21ba938b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"21ba938b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:27.199323766Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813204102-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"19a164dad53f9dd6a6588dce0ad72fad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813204102-30853_19a164dad53f9dd6a6588dce0ad72fad/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":
"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c93159f9f83d5b8fe488567058f726601e23c125b1c837dcd5e3a63a2cc90221/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-test-preload-20210813204102-30853_kube-system_19a164dad53f9dd6a6588dce0ad72fad_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-preload-20210813204102-30853_kube-system_19a164dad53f9dd6a6588dce0ad72fad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/19a164dad53f9dd6a6588dce0ad72fad/etc-hosts\
",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/19a164dad53f9dd6a6588dce0ad72fad/containers/kube-apiserver/7c1154d2\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"19a164dad53f9dd6a6588dce0ad72fad","kubernetes.io/config.hash":"19a164dad53f9dd6a6588dce0ad72fad","kubernetes.io/config.seen":"2021-08-13T20:42:24.802657506Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":
"root"},{"ociVersion":"1.0.2-dev","id":"0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb","pid":4390,"status":"running","bundle":"/run/containers/storage/overlay-containers/0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb/userdata","rootfs":"/var/lib/containers/storage/overlay/9211e4079abcbfdd96dbe6a2fffc92f94b5d23d75552b9d663b1225a71823448/merged","created":"2021-08-13T20:42:54.984630064Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"67b8d45c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"67b8d45c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.termin
ationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:54.861731974Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-487tx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f1df66f0-7c13-4c31-ab6a-49d1396711ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-487tx_f1df66f0-7c13-4c31-ab6a-49d1396711ba/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9211e4079abcbfdd96dbe6a2f
ffc92f94b5d23d75552b9d663b1225a71823448/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-487tx_kube-system_f1df66f0-7c13-4c31-ab6a-49d1396711ba_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-487tx_kube-system_f1df66f0-7c13-4c31-ab6a-49d1396711ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f1df66f0-7c13-4c31-ab6a-49d1396711ba/etc-host
s\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f1df66f0-7c13-4c31-ab6a-49d1396711ba/containers/kube-proxy/3b7955eb\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/f1df66f0-7c13-4c31-ab6a-49d1396711ba/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f1df66f0-7c13-4c31-ab6a-49d1396711ba/volumes/kubernetes.io~secret/kube-proxy-token-jr9mj\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-487tx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f1df66f0-7c13-4c31-ab6a-49d1396711ba","kubernetes.io/config.seen":"2021-08-13T20:42:52.507975655Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"roo
t"},{"ociVersion":"1.0.2-dev","id":"13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74","pid":4521,"status":"running","bundle":"/run/containers/storage/overlay-containers/13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74/userdata","rootfs":"/var/lib/containers/storage/overlay/e58abb7767557523ffbb9a2fe30cc7841b5042e73b137918769b80d91950e30f/merged","created":"2021-08-13T20:42:55.821188744Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4e50075b","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4e50075b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.t
erminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:55.704044755Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"57d1deb0-cf54-4d69-bfc3-be8dc66981e8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_57d1deb0-cf54-4d69-bfc3-be8dc66981e8/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/e58abb7767557523ffbb9a2fe30cc7841b5042e73b137918769b80d91950e30f/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_57d1deb0-cf54-4d69-bfc3-be8dc66981e8_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_57d1deb0-cf54-4d69-bfc3-be8dc66981e8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/57d1deb0-cf54-4d69-bfc3-be8dc66981e8/etc-hosts\",\"readonly\":false},{\"container_path\":\"/
dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/57d1deb0-cf54-4d69-bfc3-be8dc66981e8/containers/storage-provisioner/b5f64057\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/57d1deb0-cf54-4d69-bfc3-be8dc66981e8/volumes/kubernetes.io~secret/storage-provisioner-token-5h8dk\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"57d1deb0-cf54-4d69-bfc3-be8dc66981e8","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPol
icy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:42:54.733999851Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5","pid":4442,"status":"running","bundle":"/run/containers/storage/overlay-containers/16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5/userdata","rootfs":"/var/lib/containers/storage/overlay/9ec0e420b548af70fec05e7d91d6649a8f007c417956658d30109e3e1e460878/merged","created":"2021-08-13T20:42:55.285341558Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisi
oner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":
\"2021-08-13T20:42:54.733999851Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod57d1deb0_cf54_4d69_bfc3_be8dc66981e8.slice","io.kubernetes.cri-o.ContainerID":"16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_57d1deb0-cf54-4d69-bfc3-be8dc66981e8_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:55.112593071Z","io.kubernetes.cri-o.HostName":"test-preload-20210813204102-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"57d1deb0-cf54-4d69-bfc3-be8dc66981e8\",\"io.kubernetes.pod.namespace\":\"kube-s
ystem\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_57d1deb0-cf54-4d69-bfc3-be8dc66981e8/16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"57d1deb0-cf54-4d69-bfc3-be8dc66981e8\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9ec0e420b548af70fec05e7d91d6649a8f007c417956658d30109e3e1e460878/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_57d1deb0-cf54-4d69-bfc3-be8dc66981e8_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/contai
ners/storage/overlay-containers/16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"57d1deb0-cf54-4d69-bfc3-be8dc66981e8","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisio
ner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:42:54.733999851Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03","pid":3489,"status":"running","bundle":"/run/containers/storage/overlay-containers/21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03/userdata","rootfs":"/var/lib/containers/storage/overlay/a751cbf60b2c8f3e9039765a098aa2074a245073393e4eab6d7d8327b02ca2be/merged","created":"2021-08-13T20:42:27.128623261Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotat
ions":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:42:24.802651456Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"10aaacd07976c51fa87fbefd9b86418c\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod10aaacd07976c51fa87fbefd9b86418c.slice","io.kubernetes.cri-o.ContainerID":"21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210813204102-30853_kube-system_10aaacd07976c51fa87fbefd9b86418c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:26.63682626Z","io.kubernetes.cri-o.HostName":"test-preload-20210813204102-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210813204102-30853","io
.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813204102-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"10aaacd07976c51fa87fbefd9b86418c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813204102-30853_10aaacd07976c51fa87fbefd9b86418c/21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210813204102-30853\",\"uid\":\"10aaacd07976c51fa87fbefd9b86418c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a751cbf60b2c8f3e9039765a098aa2074a245073393e4eab6d7d8327b02ca2be/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210813204102-30853_kube-system_10aaacd07976c51fa87fbefd9b86418c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":
2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"10aaacd07976c51fa87fbefd9b86418c","kubernetes.io/config.hash":"10aaacd07976c51fa87fbefd9b86418c","kubernetes.io/config.seen":"2021-08-13T20:42:24.802651456Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"ow
ner":"root"},{"ociVersion":"1.0.2-dev","id":"4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb","pid":3700,"status":"running","bundle":"/run/containers/storage/overlay-containers/4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb/userdata","rootfs":"/var/lib/containers/storage/overlay/1ca34895889ca160e2c6d169698e9eb1e93d0564d01d496308eb9e2b20e058f8/merged","created":"2021-08-13T20:42:29.178415096Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b066e24e","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b066e24e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.termina
tionGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:28.405670588Z","io.kubernetes.cri-o.Image":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813204102-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"10aaacd07976c51fa87fbefd9b86418c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813204102-30853_10aaacd07976c51fa87fbefd9b86418c/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1ca34895889ca1
60e2c6d169698e9eb1e93d0564d01d496308eb9e2b20e058f8/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210813204102-30853_kube-system_10aaacd07976c51fa87fbefd9b86418c_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210813204102-30853_kube-system_10aaacd07976c51fa87fbefd9b86418c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/10aaacd07976c51fa87fbefd9b86418c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/10aaacd07976c51fa87fbefd9b86418c/containe
rs/etcd/58e89ebc\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"10aaacd07976c51fa87fbefd9b86418c","kubernetes.io/config.hash":"10aaacd07976c51fa87fbefd9b86418c","kubernetes.io/config.seen":"2021-08-13T20:42:24.802651456Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928","pid":3479,"status":"running","bundle":"/run/containers/storage/overlay-containers/574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928
/userdata","rootfs":"/var/lib/containers/storage/overlay/7092f5dca11c61f2576c0eb0a8925c036636d94ce4521345c8d9d7a9ab39dc11/merged","created":"2021-08-13T20:42:26.922983339Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"kubernetes.io/config.seen\":\"2021-08-13T20:42:24.802662662Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podbb577061a17ad23cfbbf52e9419bf32a.slice","io.kubernetes.cri-o.ContainerID":"574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-20210813204102-30853_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:26.617482863Z","io.kubernetes.cri-o.HostName":"test-preload-20210813204102-30853","io.kuberne
tes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210813204102-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813204102-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813204102-30853_bb577061a17ad23cfbbf52e9419bf32a/574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-test-preload-20210813204102-30853\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube
-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7092f5dca11c61f2576c0eb0a8925c036636d94ce4521345c8d9d7a9ab39dc11/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210813204102-30853_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928/userdata/shm","io.kubernetes.pod.name":"kube-s
cheduler-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T20:42:24.802662662Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649","pid":3476,"status":"running","bundle":"/run/containers/storage/overlay-containers/5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649/userdata","rootfs":"/var/lib/containers/storage/overlay/377cb045b93e7edcd3b20cfc4a5cedaea68f05b050824f7e7a82cebfee315dbc/merged","created":"2021-08-13T20:42:26.911780386Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"19a164dad53f9dd6a658
8dce0ad72fad\",\"kubernetes.io/config.seen\":\"2021-08-13T20:42:24.802657506Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod19a164dad53f9dd6a6588dce0ad72fad.slice","io.kubernetes.cri-o.ContainerID":"5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-test-preload-20210813204102-30853_kube-system_19a164dad53f9dd6a6588dce0ad72fad_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:26.650009166Z","io.kubernetes.cri-o.HostName":"test-preload-20210813204102-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preload-20210813204102-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"c
ontrol-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"19a164dad53f9dd6a6588dce0ad72fad\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813204102-30853\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813204102-30853_19a164dad53f9dd6a6588dce0ad72fad/5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210813204102-30853\",\"uid\":\"19a164dad53f9dd6a6588dce0ad72fad\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/377cb045b93e7edcd3b20cfc4a5cedaea68f05b050824f7e7a82cebfee315dbc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210813204102-30853_kube-system_19a164dad53f9dd6a6588dce0ad72fad_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"
network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"19a164dad53f9dd6a6588dce0ad72fad","kubernetes.io/config.hash":"19a164dad53f9dd6a6588dce0ad72fad","kubernetes.io/config.seen":"2021-08-13T20:42:24.802657506Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":
"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e","pid":4258,"status":"running","bundle":"/run/containers/storage/overlay-containers/5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e/userdata","rootfs":"/var/lib/containers/storage/overlay/aa5377706458941bef3cc87190b5f84fdc08e5130a7535f53cb20f7aa45e2b41/merged","created":"2021-08-13T20:42:53.603213589Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2699146b","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.k
ubernetes.container.hash\":\"2699146b\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:53.462035Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.5","io.kubernetes.cri-o.ImageRef":"70f311871ae12c14b
d0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-dxl54\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"73136e6c-4b55-4e8d-939f-f04181286524\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-dxl54_73136e6c-4b55-4e8d-939f-f04181286524/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aa5377706458941bef3cc87190b5f84fdc08e5130a7535f53cb20f7aa45e2b41/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-dxl54_kube-system_73136e6c-4b55-4e8d-939f-f04181286524_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227","io.kuber
netes.cri-o.SandboxName":"k8s_coredns-6955765f44-dxl54_kube-system_73136e6c-4b55-4e8d-939f-f04181286524_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/73136e6c-4b55-4e8d-939f-f04181286524/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/73136e6c-4b55-4e8d-939f-f04181286524/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/73136e6c-4b55-4e8d-939f-f04181286524/containers/coredns/3f36d465\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/73136e6c-4b55-4e8d-939f-f04181286524/volumes/kubernetes.io~secret/coredns-token-76k85\",\"readonly\":true}]","io.kubernetes.pod.name
":"coredns-6955765f44-dxl54","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"73136e6c-4b55-4e8d-939f-f04181286524","kubernetes.io/config.seen":"2021-08-13T20:42:52.51147394Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390","pid":3555,"status":"running","bundle":"/run/containers/storage/overlay-containers/69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390/userdata","rootfs":"/var/lib/containers/storage/overlay/a552f71ccdac365ea54e8f53bfe5c25b25b7d08ece01253f97d1c922f3cf0631/merged","created":"2021-08-13T20:42:27.523777369Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.
container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:27.34461654Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.17.0","io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube
-scheduler-test-preload-20210813204102-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813204102-30853_bb577061a17ad23cfbbf52e9419bf32a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a552f71ccdac365ea54e8f53bfe5c25b25b7d08ece01253f97d1c922f3cf0631/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-test-preload-20210813204102-30853_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210813204102-3085
3_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/bb242619\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T20:42:24.802662662Z","
kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1","pid":3638,"status":"running","bundle":"/run/containers/storage/overlay-containers/89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1/userdata","rootfs":"/var/lib/containers/storage/overlay/0cfef140ffe2bb5b3cbe6badad4ec35c8fe1fa919dfeb8241a40549578ba86b6/merged","created":"2021-08-13T20:42:27.928352106Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"589bcd22","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"589bcd22\",\"io.kubernetes.container.restartCount
\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:42:27.783468549Z","io.kubernetes.cri-o.Image":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.ImageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813204102-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"603b914543a305bf066dc8de01ce2232\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/ku
be-system_kube-controller-manager-test-preload-20210813204102-30853_603b914543a305bf066dc8de01ce2232/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0cfef140ffe2bb5b3cbe6badad4ec35c8fe1fa919dfeb8241a40549578ba86b6/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-test-preload-20210813204102-30853_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210813204102-30853_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.St
dinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/603b914543a305bf066dc8de01ce2232/containers/kube-controller-manager/193bc15b\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/603b914543a305bf066dc8de01ce2232/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}
]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.hash":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.seen":"2021-08-13T20:42:24.802660394Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f","pid":3496,"status":"running","bundle":"/run/containers/storage/overlay-containers/b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f/userdata","rootfs":"/var/lib/containers/storage/overlay/ef47854a4945ca5b173c276eb8416100c94f873e6b380c7fde011f24f2060484/merged","created":"2021-08-13T20:42:27.034639679Z","annotations":{"component":"kube-controller-manager","io.container.manager
":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"603b914543a305bf066dc8de01ce2232\",\"kubernetes.io/config.seen\":\"2021-08-13T20:42:24.802660394Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod603b914543a305bf066dc8de01ce2232.slice","io.kubernetes.cri-o.ContainerID":"b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210813204102-30853_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:26.645483597Z","io.kubernetes.cri-o.HostName":"test-preload-20210813204102-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s
.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-test-preload-20210813204102-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813204102-30853\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"603b914543a305bf066dc8de01ce2232\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813204102-30853_603b914543a305bf066dc8de01ce2232/b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-test-preload-20210813204102-30853\",\"uid\":\"603b914543a305bf066dc8de01ce2232\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ef47854a4945ca5b173c276eb8416100c94f873e6b380c7fde011f24f2060484/merged","io.kubernetes.cri-o.Name":"k
8s_kube-controller-manager-test-preload-20210813204102-30853_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813204102-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"603b914543a305bf066dc8de01ce2232","kubernetes.io/conf
ig.hash":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.seen":"2021-08-13T20:42:24.802660394Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7","pid":4336,"status":"running","bundle":"/run/containers/storage/overlay-containers/c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7/userdata","rootfs":"/var/lib/containers/storage/overlay/e8f16392582546cf665f4107dd1a2fa4733d973fc5ec57efb08091b771bf5d92/merged","created":"2021-08-13T20:42:54.077493394Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:42:52.507975655Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podf1df66f0_7c13_4c31_ab6a_49d1396711ba.slice
","io.kubernetes.cri-o.ContainerID":"c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-487tx_kube-system_f1df66f0-7c13-4c31-ab6a-49d1396711ba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:53.85552352Z","io.kubernetes.cri-o.HostName":"test-preload-20210813204102-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-487tx","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"68bd87b66\",\"io.kubernetes.pod.uid\":\"f1df66f0-7c13-4c31-ab6a-49d1396711ba\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kub
e-proxy-487tx\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-487tx_f1df66f0-7c13-4c31-ab6a-49d1396711ba/c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-487tx\",\"uid\":\"f1df66f0-7c13-4c31-ab6a-49d1396711ba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e8f16392582546cf665f4107dd1a2fa4733d973fc5ec57efb08091b771bf5d92/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-487tx_kube-system_f1df66f0-7c13-4c31-ab6a-49d1396711ba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxI
D":"c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7/userdata/shm","io.kubernetes.pod.name":"kube-proxy-487tx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f1df66f0-7c13-4c31-ab6a-49d1396711ba","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:42:52.507975655Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227","pid":4196,"status":"running","bundle":"/run/containers/storage/overlay-containers/e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227/userdata","rootfs":"/var/lib/containers/storage/overlay/d2bc29db289efd26798c42003e96b7b30dfa284f9ca8d8749e97c15d8f0390e6/merged","c
reated":"2021-08-13T20:42:53.296851932Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:42:52.51147394Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"92:61:9e:84:c2:49\"},{\"name\":\"vethfeccd7b1\",\"mac\":\"56:46:8f:a0:4b:28\"},{\"name\":\"eth0\",\"mac\":\"d6:8a:c4:e1:27:3b\",\"sandbox\":\"/var/run/netns/1f63127d-c60d-420c-8bdf-96965fcc6376\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod73136e6c_4b55_4e8d_939f_f04181286524.slice","io.kubernetes.cri-o.ContainerID":"e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-dxl54_kube-s
ystem_73136e6c-4b55-4e8d-939f-f04181286524_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:42:52.963346732Z","io.kubernetes.cri-o.HostName":"coredns-6955765f44-dxl54","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-dxl54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-dxl54\",\"pod-template-hash\":\"6955765f44\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"73136e6c-4b55-4e8d-939f-f04181286524\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-dxl54_73136e6c-4b55-4e8d-939f-f04181286524/e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227.lo
g","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-dxl54\",\"uid\":\"73136e6c-4b55-4e8d-939f-f04181286524\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d2bc29db289efd26798c42003e96b7b30dfa284f9ca8d8749e97c15d8f0390e6/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-dxl54_kube-system_73136e6c-4b55-4e8d-939f-f04181286524_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e
7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-dxl54","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"73136e6c-4b55-4e8d-939f-f04181286524","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:42:52.51147394Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"}]
	I0813 20:44:05.529803   32742 cri.go:113] list returned 14 containers
	I0813 20:44:05.529816   32742 cri.go:116] container: {ID:03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712 Status:running}
	I0813 20:44:05.529827   32742 cri.go:122] skipping {03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712 running}: state = "running", want "paused"
	I0813 20:44:05.529836   32742 cri.go:116] container: {ID:0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb Status:running}
	I0813 20:44:05.529844   32742 cri.go:122] skipping {0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb running}: state = "running", want "paused"
	I0813 20:44:05.529851   32742 cri.go:116] container: {ID:13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74 Status:running}
	I0813 20:44:05.529856   32742 cri.go:122] skipping {13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74 running}: state = "running", want "paused"
	I0813 20:44:05.529860   32742 cri.go:116] container: {ID:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5 Status:running}
	I0813 20:44:05.529864   32742 cri.go:118] skipping 16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5 - not in ps
	I0813 20:44:05.529870   32742 cri.go:116] container: {ID:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03 Status:running}
	I0813 20:44:05.529874   32742 cri.go:118] skipping 21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03 - not in ps
	I0813 20:44:05.529877   32742 cri.go:116] container: {ID:4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb Status:running}
	I0813 20:44:05.529881   32742 cri.go:122] skipping {4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb running}: state = "running", want "paused"
	I0813 20:44:05.529885   32742 cri.go:116] container: {ID:574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928 Status:running}
	I0813 20:44:05.529891   32742 cri.go:118] skipping 574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928 - not in ps
	I0813 20:44:05.529896   32742 cri.go:116] container: {ID:5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649 Status:running}
	I0813 20:44:05.529900   32742 cri.go:118] skipping 5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649 - not in ps
	I0813 20:44:05.529903   32742 cri.go:116] container: {ID:5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e Status:running}
	I0813 20:44:05.529907   32742 cri.go:122] skipping {5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e running}: state = "running", want "paused"
	I0813 20:44:05.529911   32742 cri.go:116] container: {ID:69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390 Status:running}
	I0813 20:44:05.529916   32742 cri.go:122] skipping {69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390 running}: state = "running", want "paused"
	I0813 20:44:05.529921   32742 cri.go:116] container: {ID:89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1 Status:running}
	I0813 20:44:05.529925   32742 cri.go:122] skipping {89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1 running}: state = "running", want "paused"
	I0813 20:44:05.529932   32742 cri.go:116] container: {ID:b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f Status:running}
	I0813 20:44:05.529938   32742 cri.go:118] skipping b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f - not in ps
	I0813 20:44:05.529947   32742 cri.go:116] container: {ID:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7 Status:running}
	I0813 20:44:05.529958   32742 cri.go:118] skipping c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7 - not in ps
	I0813 20:44:05.529964   32742 cri.go:116] container: {ID:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227 Status:running}
	I0813 20:44:05.529974   32742 cri.go:118] skipping e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227 - not in ps
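The filtering above is mechanical: the runtime returns every container it knows about, and minikube keeps only those whose ID appeared in the earlier crictl ps listing and whose state matches what the operation wants (here "paused"). A compact sketch of that selection, with a hypothetical Container type and filterContainers helper standing in for the real cri package:

    package main

    import "fmt"

    // Container mirrors the minimal shape logged above: an ID plus a
    // state string as reported by the runtime ("running", "paused", ...).
    type Container struct {
        ID    string
        State string
    }

    // filterContainers keeps containers that are both present in the
    // crictl ps listing (inPs) and in the wanted state, logging a skip
    // reason for everything else, as in the cri.go lines above.
    func filterContainers(all []Container, inPs map[string]bool, want string) []Container {
        var kept []Container
        for _, c := range all {
            if !inPs[c.ID] {
                fmt.Printf("skipping %s - not in ps\n", c.ID)
                continue
            }
            if c.State != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.State, c.State, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        all := []Container{{ID: "03d5024b4d0a", State: "running"}}
        fmt.Println(filterContainers(all, map[string]bool{"03d5024b4d0a": true}, "paused"))
    }

In this run nothing is in the "paused" state, so every container is skipped and the pause-check returns an empty set.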
	I0813 20:44:05.530011   32742 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:44:05.537244   32742 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:44:05.537269   32742 kubeadm.go:600] restartCluster start
	I0813 20:44:05.537306   32742 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:44:05.544381   32742 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:44:05.545099   32742 kubeconfig.go:93] found "test-preload-20210813204102-30853" server: "https://192.168.39.171:8443"
	I0813 20:44:05.545507   32742 kapi.go:59] client config for test-preload-20210813204102-30853: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813
204102-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:44:05.547223   32742 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:44:05.554332   32742 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
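The "needs reconfigure" decision above reduces to a unified diff of the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new; any non-empty diff (here the kubernetesVersion bump from v1.17.0 to v1.17.3) flags the cluster for restart. A minimal sketch of that check, assuming plain local exec.Command for illustration (the real code runs the command over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfigure reports whether the rendered kubeadm config differs
    // from the one the cluster was started with. diff exits 0 when the
    // files match and non-zero when they differ (or can't be read), which
    // this sketch conservatively treats as "reconfigure".
    func needsReconfigure(current, rendered string) (bool, string) {
        out, err := exec.Command("diff", "-u", current, rendered).CombinedOutput()
        if err != nil {
            return true, string(out) // non-zero exit: configs differ
        }
        return false, ""
    }

    func main() {
        differs, diff := needsReconfigure(
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new",
        )
        if differs {
            fmt.Printf("needs reconfigure: configs differ:\n%s", diff)
        }
    }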
	I0813 20:44:05.554363   32742 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:44:05.554377   32742 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:44:05.554422   32742 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:44:05.592946   32742 cri.go:76] found id: "13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74"
	I0813 20:44:05.592969   32742 cri.go:76] found id: "0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb"
	I0813 20:44:05.592975   32742 cri.go:76] found id: "5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e"
	I0813 20:44:05.592980   32742 cri.go:76] found id: "4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb"
	I0813 20:44:05.592985   32742 cri.go:76] found id: "89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1"
	I0813 20:44:05.592990   32742 cri.go:76] found id: "03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712"
	I0813 20:44:05.592996   32742 cri.go:76] found id: "69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390"
	I0813 20:44:05.593001   32742 cri.go:76] found id: ""
	I0813 20:44:05.593008   32742 cri.go:221] Stopping containers: [13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74 0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb 5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e 4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb 89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1 03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712 69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390]
	I0813 20:44:05.593062   32742 ssh_runner.go:149] Run: which crictl
	I0813 20:44:05.597408   32742 ssh_runner.go:149] Run: sudo /bin/crictl stop 13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74 0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb 5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e 4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb 89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1 03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712 69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390
	I0813 20:44:07.586316   32742 ssh_runner.go:189] Completed: sudo /bin/crictl stop 13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74 0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb 5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e 4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb 89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1 03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712 69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390: (1.988847255s)
	I0813 20:44:07.586397   32742 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:44:07.599615   32742 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:44:07.606750   32742 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5615 Aug 13 20:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Aug 13 20:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 13 20:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5595 Aug 13 20:42 /etc/kubernetes/scheduler.conf
	
	I0813 20:44:07.606801   32742 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:44:07.613180   32742 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:44:07.619739   32742 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:44:07.625818   32742 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:44:07.632152   32742 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:44:07.638715   32742 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:44:07.638730   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:44:07.706828   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:44:08.561564   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:44:08.850637   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:44:08.942158   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:44:09.070829   32742 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:44:09.070913   32742 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:44:09.582758   32742 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:44:10.081944   32742 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:44:10.582579   32742 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:44:11.082281   32742 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:44:11.093789   32742 api_server.go:70] duration metric: took 2.022962366s to wait for apiserver process to appear ...
	I0813 20:44:11.093813   32742 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:44:11.093823   32742 api_server.go:239] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0813 20:44:15.290816   32742 api_server.go:265] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:44:15.290871   32742 api_server.go:101] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:44:15.791364   32742 api_server.go:239] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0813 20:44:15.825141   32742 api_server.go:265] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:44:15.825168   32742 api_server.go:101] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:44:16.291775   32742 api_server.go:239] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0813 20:44:16.299096   32742 api_server.go:265] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:44:16.299123   32742 api_server.go:101] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:44:16.791786   32742 api_server.go:239] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0813 20:44:16.799222   32742 api_server.go:265] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0813 20:44:16.806383   32742 api_server.go:139] control plane version: v1.17.3
	I0813 20:44:16.806408   32742 api_server.go:129] duration metric: took 5.71258899s to wait for apiserver health ...
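The healthz wait that just completed is a simple poll loop: anonymous GETs against https://<apiserver>:8443/healthz, treating 403 (RBAC not yet bootstrapped for system:anonymous) and 500 (poststarthooks such as rbac/bootstrap-roles still failing) as "not ready", and stopping at the first 200 "ok". A minimal sketch of that poller, assuming certificate verification is skipped since the apiserver CA is not in the system trust store (names here are illustrative, not minikube's actual api_server.go helpers):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the deadline expires. Any non-200 status (403, 500)
    // and transient connection errors are retried, mirroring the
    // behavior visible in the log above.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Liveness-probe sketch only: skip TLS verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // ~500ms cadence, as in the log
        }
        return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.39.171:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }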
	I0813 20:44:16.806420   32742 cni.go:93] Creating CNI manager for ""
	I0813 20:44:16.806429   32742 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:44:17.377525   32742 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:44:17.377621   32742 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:44:17.430203   32742 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 20:44:17.449951   32742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:44:17.463794   32742 system_pods.go:59] 7 kube-system pods found
	I0813 20:44:17.463826   32742 system_pods.go:61] "coredns-6955765f44-dxl54" [73136e6c-4b55-4e8d-939f-f04181286524] Running
	I0813 20:44:17.463835   32742 system_pods.go:61] "etcd-test-preload-20210813204102-30853" [c09f21bb-529c-43bd-b2b0-0e641b5e72ad] Running
	I0813 20:44:17.463842   32742 system_pods.go:61] "kube-apiserver-test-preload-20210813204102-30853" [3e7cba98-ea81-4c9d-9c8c-3af5f520462f] Pending
	I0813 20:44:17.463849   32742 system_pods.go:61] "kube-controller-manager-test-preload-20210813204102-30853" [b24c2dbb-2867-4563-b18c-05269abc521f] Running
	I0813 20:44:17.463855   32742 system_pods.go:61] "kube-proxy-487tx" [f1df66f0-7c13-4c31-ab6a-49d1396711ba] Running
	I0813 20:44:17.463862   32742 system_pods.go:61] "kube-scheduler-test-preload-20210813204102-30853" [ba7155d7-e193-44c2-83db-8462bbf8da71] Pending
	I0813 20:44:17.463866   32742 system_pods.go:61] "storage-provisioner" [57d1deb0-cf54-4d69-bfc3-be8dc66981e8] Running
	I0813 20:44:17.463871   32742 system_pods.go:74] duration metric: took 13.899035ms to wait for pod list to return data ...
	I0813 20:44:17.463881   32742 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:44:17.467851   32742 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:44:17.467879   32742 node_conditions.go:123] node cpu capacity is 2
	I0813 20:44:17.467892   32742 node_conditions.go:105] duration metric: took 4.005788ms to run NodePressure ...
	I0813 20:44:17.467912   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:44:17.939450   32742 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:44:17.942952   32742 kubeadm.go:746] kubelet initialised
	I0813 20:44:17.942970   32742 kubeadm.go:747] duration metric: took 3.498608ms waiting for restarted kubelet to initialise ...
	I0813 20:44:17.942977   32742 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:44:17.946191   32742 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-dxl54" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.955201   32742 pod_ready.go:92] pod "coredns-6955765f44-dxl54" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:17.955219   32742 pod_ready.go:81] duration metric: took 9.005032ms waiting for pod "coredns-6955765f44-dxl54" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.955227   32742 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.960156   32742 pod_ready.go:92] pod "etcd-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:17.960175   32742 pod_ready.go:81] duration metric: took 4.941681ms waiting for pod "etcd-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.960185   32742 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.966323   32742 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:17.966341   32742 pod_ready.go:81] duration metric: took 6.147088ms waiting for pod "kube-apiserver-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.966352   32742 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.971500   32742 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:17.971519   32742 pod_ready.go:81] duration metric: took 5.158455ms waiting for pod "kube-controller-manager-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:17.971527   32742 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-487tx" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:18.354682   32742 pod_ready.go:92] pod "kube-proxy-487tx" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:18.354702   32742 pod_ready.go:81] duration metric: took 383.168565ms waiting for pod "kube-proxy-487tx" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:18.354719   32742 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:18.742839   32742 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:18.742882   32742 pod_ready.go:81] duration metric: took 388.154055ms waiting for pod "kube-scheduler-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:18.742898   32742 pod_ready.go:38] duration metric: took 799.908664ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
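Each pod_ready wait above amounts to reading the pod's PodReady condition from the API server and retrying until it is True. A minimal client-go sketch of that check, assuming a kubeconfig at a hypothetical path and the modern context-taking Get signature (older client-go releases contemporary with v1.17 omit the context argument):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod has condition Ready=True,
    // which is exactly what the pod_ready waits above poll for.
    func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Hypothetical kubeconfig path; substitute your own.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := isPodReady(cs, "kube-system", "coredns-6955765f44-dxl54")
        fmt.Println(ready, err)
    }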
	I0813 20:44:18.742947   32742 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:44:18.754931   32742 ops.go:34] apiserver oom_adj: -16
	I0813 20:44:18.754952   32742 kubeadm.go:604] restartCluster took 13.217676716s
	I0813 20:44:18.754971   32742 kubeadm.go:392] StartCluster complete in 13.302324991s
	I0813 20:44:18.754989   32742 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:44:18.755102   32742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:44:18.756013   32742 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:44:18.756656   32742 kapi.go:59] client config for test-preload-20210813204102-30853: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813
204102-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:44:19.273190   32742 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210813204102-30853" rescaled to 1
	I0813 20:44:19.273254   32742 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0813 20:44:19.275120   32742 out.go:177] * Verifying Kubernetes components...
	I0813 20:44:19.273327   32742 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:44:19.273349   32742 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0813 20:44:19.273519   32742 config.go:177] Loaded profile config "test-preload-20210813204102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.3
	I0813 20:44:19.275230   32742 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210813204102-30853"
	I0813 20:44:19.275238   32742 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:44:19.275253   32742 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210813204102-30853"
	W0813 20:44:19.275265   32742 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:44:19.275239   32742 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210813204102-30853"
	I0813 20:44:19.275313   32742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210813204102-30853"
	I0813 20:44:19.275295   32742 host.go:66] Checking if "test-preload-20210813204102-30853" exists ...
	I0813 20:44:19.275693   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:44:19.275726   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:19.275877   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:44:19.275922   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:19.290088   32742 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210813204102-30853" to be "Ready" ...
	I0813 20:44:19.293384   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35149
	I0813 20:44:19.293458   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0813 20:44:19.293851   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:19.293919   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:19.294313   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:44:19.294329   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:19.294479   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:44:19.294503   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:19.294608   32742 node_ready.go:49] node "test-preload-20210813204102-30853" has status "Ready":"True"
	I0813 20:44:19.294626   32742 node_ready.go:38] duration metric: took 4.511508ms waiting for node "test-preload-20210813204102-30853" to be "Ready" ...
	I0813 20:44:19.294637   32742 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:44:19.294747   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:19.294843   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:19.294939   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetState
	I0813 20:44:19.295433   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:44:19.295483   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:19.299075   32742 kapi.go:59] client config for test-preload-20210813204102-30853: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813204102-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813
204102-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:44:19.301040   32742 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-dxl54" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:19.307353   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0813 20:44:19.307768   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:19.308202   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:44:19.308229   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:19.308242   32742 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210813204102-30853"
	W0813 20:44:19.308266   32742 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:44:19.308297   32742 host.go:66] Checking if "test-preload-20210813204102-30853" exists ...
	I0813 20:44:19.308554   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:19.308727   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetState
	I0813 20:44:19.308735   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:44:19.308779   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:19.311978   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:44:19.314170   32742 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:44:19.314280   32742 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:44:19.314337   32742 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:44:19.314363   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:44:19.320190   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:44:19.320615   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:44:19.320645   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:44:19.320778   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:44:19.320933   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:44:19.320997   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35103
	I0813 20:44:19.321075   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:44:19.321237   32742 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813204102-30853/id_rsa Username:docker}
	I0813 20:44:19.321406   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:19.321861   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:44:19.321891   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:19.322225   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:19.322677   32742 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:44:19.322717   32742 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:19.334196   32742 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0813 20:44:19.334577   32742 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:19.335064   32742 main.go:130] libmachine: Using API Version  1
	I0813 20:44:19.335091   32742 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:19.335444   32742 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:19.335652   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetState
	I0813 20:44:19.338676   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .DriverName
	I0813 20:44:19.338893   32742 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:44:19.338910   32742 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:44:19.338930   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHHostname
	I0813 20:44:19.343923   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:44:19.344319   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:74:73", ip: ""} in network mk-test-preload-20210813204102-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:41:20 +0000 UTC Type:0 Mac:52:54:00:53:74:73 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-20210813204102-30853 Clientid:01:52:54:00:53:74:73}
	I0813 20:44:19.344351   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | domain test-preload-20210813204102-30853 has defined IP address 192.168.39.171 and MAC address 52:54:00:53:74:73 in network mk-test-preload-20210813204102-30853
	I0813 20:44:19.344508   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHPort
	I0813 20:44:19.344745   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHKeyPath
	I0813 20:44:19.344911   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .GetSSHUsername
	I0813 20:44:19.345048   32742 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813204102-30853/id_rsa Username:docker}
	I0813 20:44:19.439145   32742 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:44:19.452899   32742 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:44:19.482180   32742 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:44:19.544285   32742 pod_ready.go:92] pod "coredns-6955765f44-dxl54" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:19.544304   32742 pod_ready.go:81] duration metric: took 243.245528ms waiting for pod "coredns-6955765f44-dxl54" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:19.544314   32742 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:19.877502   32742 main.go:130] libmachine: Making call to close driver server
	I0813 20:44:19.877531   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .Close
	I0813 20:44:19.877795   32742 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:44:19.877815   32742 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:44:19.877827   32742 main.go:130] libmachine: Making call to close driver server
	I0813 20:44:19.877836   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .Close
	I0813 20:44:19.878048   32742 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:44:19.878069   32742 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:44:19.878095   32742 main.go:130] libmachine: Making call to close driver server
	I0813 20:44:19.878103   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .Close
	I0813 20:44:19.878312   32742 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:44:19.878329   32742 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:44:19.878420   32742 main.go:130] libmachine: Making call to close driver server
	I0813 20:44:19.878435   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .Close
	I0813 20:44:19.878631   32742 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:44:19.878646   32742 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:44:19.878656   32742 main.go:130] libmachine: Making call to close driver server
	I0813 20:44:19.878666   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) Calling .Close
	I0813 20:44:19.878914   32742 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:44:19.878931   32742 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:44:19.878960   32742 main.go:130] libmachine: (test-preload-20210813204102-30853) DBG | Closing plugin on server side
	I0813 20:44:19.881542   32742 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0813 20:44:19.881562   32742 addons.go:344] enableAddons completed in 608.223238ms
	I0813 20:44:19.943991   32742 pod_ready.go:92] pod "etcd-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:19.944011   32742 pod_ready.go:81] duration metric: took 399.690791ms waiting for pod "etcd-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:19.944022   32742 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:20.343780   32742 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:20.343808   32742 pod_ready.go:81] duration metric: took 399.778355ms waiting for pod "kube-apiserver-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:20.343822   32742 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:20.743878   32742 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:20.743901   32742 pod_ready.go:81] duration metric: took 400.070529ms waiting for pod "kube-controller-manager-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:20.743912   32742 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-487tx" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:21.143700   32742 pod_ready.go:92] pod "kube-proxy-487tx" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:21.143720   32742 pod_ready.go:81] duration metric: took 399.800637ms waiting for pod "kube-proxy-487tx" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:21.143731   32742 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:21.543536   32742 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813204102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:44:21.543558   32742 pod_ready.go:81] duration metric: took 399.81914ms waiting for pod "kube-scheduler-test-preload-20210813204102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:44:21.543571   32742 pod_ready.go:38] duration metric: took 2.248922025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:44:21.543592   32742 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:44:21.543644   32742 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:44:21.560313   32742 api_server.go:70] duration metric: took 2.287027902s to wait for apiserver process to appear ...
	I0813 20:44:21.560337   32742 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:44:21.560347   32742 api_server.go:239] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0813 20:44:21.566547   32742 api_server.go:265] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0813 20:44:21.567533   32742 api_server.go:139] control plane version: v1.17.3
	I0813 20:44:21.567556   32742 api_server.go:129] duration metric: took 7.212151ms to wait for apiserver health ...
	I0813 20:44:21.567567   32742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:44:21.745615   32742 system_pods.go:59] 7 kube-system pods found
	I0813 20:44:21.745641   32742 system_pods.go:61] "coredns-6955765f44-dxl54" [73136e6c-4b55-4e8d-939f-f04181286524] Running
	I0813 20:44:21.745646   32742 system_pods.go:61] "etcd-test-preload-20210813204102-30853" [c09f21bb-529c-43bd-b2b0-0e641b5e72ad] Running
	I0813 20:44:21.745650   32742 system_pods.go:61] "kube-apiserver-test-preload-20210813204102-30853" [3e7cba98-ea81-4c9d-9c8c-3af5f520462f] Running
	I0813 20:44:21.745654   32742 system_pods.go:61] "kube-controller-manager-test-preload-20210813204102-30853" [b24c2dbb-2867-4563-b18c-05269abc521f] Running
	I0813 20:44:21.745658   32742 system_pods.go:61] "kube-proxy-487tx" [f1df66f0-7c13-4c31-ab6a-49d1396711ba] Running
	I0813 20:44:21.745661   32742 system_pods.go:61] "kube-scheduler-test-preload-20210813204102-30853" [ba7155d7-e193-44c2-83db-8462bbf8da71] Running
	I0813 20:44:21.745665   32742 system_pods.go:61] "storage-provisioner" [57d1deb0-cf54-4d69-bfc3-be8dc66981e8] Running
	I0813 20:44:21.745670   32742 system_pods.go:74] duration metric: took 178.097871ms to wait for pod list to return data ...
	I0813 20:44:21.745677   32742 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:44:21.944416   32742 default_sa.go:45] found service account: "default"
	I0813 20:44:21.944440   32742 default_sa.go:55] duration metric: took 198.756829ms for default service account to be created ...
	I0813 20:44:21.944448   32742 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:44:22.146455   32742 system_pods.go:86] 7 kube-system pods found
	I0813 20:44:22.146489   32742 system_pods.go:89] "coredns-6955765f44-dxl54" [73136e6c-4b55-4e8d-939f-f04181286524] Running
	I0813 20:44:22.146506   32742 system_pods.go:89] "etcd-test-preload-20210813204102-30853" [c09f21bb-529c-43bd-b2b0-0e641b5e72ad] Running
	I0813 20:44:22.146513   32742 system_pods.go:89] "kube-apiserver-test-preload-20210813204102-30853" [3e7cba98-ea81-4c9d-9c8c-3af5f520462f] Running
	I0813 20:44:22.146519   32742 system_pods.go:89] "kube-controller-manager-test-preload-20210813204102-30853" [b24c2dbb-2867-4563-b18c-05269abc521f] Running
	I0813 20:44:22.146525   32742 system_pods.go:89] "kube-proxy-487tx" [f1df66f0-7c13-4c31-ab6a-49d1396711ba] Running
	I0813 20:44:22.146531   32742 system_pods.go:89] "kube-scheduler-test-preload-20210813204102-30853" [ba7155d7-e193-44c2-83db-8462bbf8da71] Running
	I0813 20:44:22.146539   32742 system_pods.go:89] "storage-provisioner" [57d1deb0-cf54-4d69-bfc3-be8dc66981e8] Running
	I0813 20:44:22.146547   32742 system_pods.go:126] duration metric: took 202.094273ms to wait for k8s-apps to be running ...
	I0813 20:44:22.146560   32742 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:44:22.146622   32742 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:44:22.159119   32742 system_svc.go:56] duration metric: took 12.554196ms WaitForService to wait for kubelet.
	I0813 20:44:22.159140   32742 kubeadm.go:547] duration metric: took 2.885858617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:44:22.159161   32742 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:44:22.343067   32742 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:44:22.343095   32742 node_conditions.go:123] node cpu capacity is 2
	I0813 20:44:22.343108   32742 node_conditions.go:105] duration metric: took 183.941758ms to run NodePressure ...
	I0813 20:44:22.343120   32742 start.go:231] waiting for startup goroutines ...
	I0813 20:44:22.386188   32742 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0813 20:44:22.388144   32742 out.go:177] 
	W0813 20:44:22.388280   32742 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0813 20:44:22.389834   32742 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0813 20:44:22.391275   32742 out.go:177] * Done! kubectl is now configured to use "test-preload-20210813204102-30853" cluster and "default" namespace by default
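The warning just above flags a client/server minor-version skew of 3 (kubectl 1.20.5 against a v1.17.3 cluster); kubectl's support policy only guarantees compatibility within one minor version of the server. A sketch of how such a skew figure can be computed from the two version strings (the helper name is illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor
    // components of two "major.minor.patch" version strings, e.g.
    // minorSkew("1.20.5", "1.17.3") == 3, the value reported above.
    func minorSkew(client, cluster string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("malformed version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(cluster)
        if err != nil {
            return 0, err
        }
        if c < s {
            c, s = s, c
        }
        return c - s, nil
    }

    func main() {
        skew, _ := minorSkew("1.20.5", "1.17.3")
        fmt.Printf("minor skew: %d\n", skew) // prints 3
    }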
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:41:17 UTC, end at Fri 2021-08-13 20:44:23 UTC. --
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.205978492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0b9a66fa-61c3-4562-ab9d-3ea38840538e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.206249026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{
	  &Container{Id:764f2ed126e3a43ac48dcb89f104a8dcac5bbfccdb19d43f5fffe00917f83afc,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628887456054305027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:c95d1a6870f0a39dbd231d57d2c3ca123a182f7f0a6c8e350abf9ebd0ad3697b,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628887455962159063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:882326f4f17f8c6b56e7849de9d494eb2c81f175d5ff3a12255a9de5a1b90606,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628887455899461161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:fd06a92e8bf04480fbd1af97073471ab25164ce1e87720724fbc0321b99f5930,PodSandboxId:beb6045bf99b7f49fe73f104182321909137192eae4966da4e80b652d4d6a0b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628887451240394708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:083d0c0f0ffdebb07b140504b5b61ece9bbbc8279d12a116ad5a8606f02536a7,PodSandboxId:347c58d35c3ef20e394825467a3f437147c835910ab571ab28d985a058684b2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628887450940460005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:c66a092dda6eacba8681a594c2429f3e1af97d9e1840d965d99da432b60479f7,PodSandboxId:561c5b98fa34d4f4b92f6fa824dccaafc33921f26e3c825953a2fe00b21beaa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628887450654028313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b5c051329bdddede7a859dd793b08d,},Annotations:map[string]string{io.kubernetes.container.hash: bc145509,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:f32762c56b4b7de3e38ac81128a1f270dc83bf44c4ed500415b4ea1664eccf15,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628887450548016507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaacd07976c51fa87fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628887375821188744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628887374984630064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628887373603213589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628887349178415096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaacd07976c51fa87fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1,PodSandboxId:b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628887347928352106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712,PodSandboxId:5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628887347549107216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a164dad53f9dd6a6588dce0ad72fad,},Annotations:map[string]string{io.kubernetes.container.hash: 21ba938b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390,PodSandboxId:574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628887347523777369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	},}" file="go-grpc-middleware/chain.go:25" id=0b9a66fa-61c3-4562-ab9d-3ea38840538e name=/runtime.v1alpha2.RuntimeService/ListContainers
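
	[editor's note] The Request/Response pair above then repeats at roughly 45-50 ms intervals: a CRI client polls the runtime.v1alpha2.RuntimeService/ListContainers RPC with an empty ContainerFilter, CRI-O logs "No filters were applied, returning full container list", and the same set of running and exited containers comes back each time. On the node, `sudo crictl ps -a` drives this same RPC. The Go sketch below is illustrative only (not part of the test); it assumes CRI-O's default socket path (unix:///var/run/crio/crio.sock) and the k8s.io/cri-api v1alpha2 bindings that match the service name in the log.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// The socket path is an assumption (CRI-O's default); adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter matches every container, which is why CRI-O logs
		// "No filters were applied, returning full container list".
		resp, err := pb.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&pb.ListContainersRequest{Filter: &pb.ContainerFilter{}})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Print the same fields the log payload carries: id, name, attempt, state.
			fmt.Printf("%.13s %-24s attempt=%d %v\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
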
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.257152136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c85bd2a8-49dc-4bfb-9df9-0fbc87a3a30a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.257329636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c85bd2a8-49dc-4bfb-9df9-0fbc87a3a30a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.257606425Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=c85bd2a8-49dc-4bfb-9df9-0fbc87a3a30a name=/runtime.v1alpha2.RuntimeService/ListContainers [payload identical to the 20:44:23.206249026Z response above; elided]
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.301139225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c12e26e-7056-4411-bc44-c88285c40abf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.301202652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c12e26e-7056-4411-bc44-c88285c40abf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.301561318Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=8c12e26e-7056-4411-bc44-c88285c40abf name=/runtime.v1alpha2.RuntimeService/ListContainers [payload identical to the 20:44:23.206249026Z response above; elided]
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.346930829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=11612cd5-656f-45fe-a956-f337c750eccf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.346996571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=11612cd5-656f-45fe-a956-f337c750eccf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.347270140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:764f2ed126e3a43ac48dcb89f104a8dcac5bbfccdb19d43f5fffe00917f83afc,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628887456054305027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95d1a6870f0a39dbd231d57d2c3ca123a182f7f0a6c8e350abf9ebd0ad3697b,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628887455962159063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882326f4f17f8c6b56e7849de9d494eb2c81f175d5ff3a12255a9de5a1b90606,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628887455899461161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd06a92e8bf04480fbd1af97073471ab25164ce1e87720724fbc0321b99f5930,PodSandboxId:beb6045bf99b7f49fe73f104182321909137192eae4966da4e80b652d4d6a0b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628887451240394708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083d0c0f0ffdebb07b140504b5b61ece9bbbc8279d12a116ad5a8606f02536a7,PodSandboxId:347c58d35c3ef20e394825467a3f437147c835910ab571ab28d985a058684b2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628887450940460005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66a092dda6eacba8681a594c2429f3e1af97d9e1840d965d99da432b60479f7,PodSandboxId:561c5b98fa34d4f4b92f6fa824dccaafc33921f26e3c825953a2fe00b21beaa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628887450654028313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a9b5c051329bdddede7a859dd793b08d,},Annotations:map[string]string{io.kubernetes.container.hash: bc145509,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32762c56b4b7de3e38ac81128a1f270dc83bf44c4ed500415b4ea1664eccf15,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628887450548016507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10aaacd07976c51fa87fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628887375821188744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628887374984630064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628887373603213589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628887349178415096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaacd07976c51fa8
7fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1,PodSandboxId:b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628887347928352106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de0
1ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712,PodSandboxId:5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628887347549107216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a164dad53f9dd6a6588dce0ad72fad,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 21ba938b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390,PodSandboxId:574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628887347523777369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash
: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=11612cd5-656f-45fe-a956-f337c750eccf name=/runtime.v1alpha2.RuntimeService/ListContainers
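The block above is one round of CRI polling: a client (the kubelet, plus minikube's own health checks) calls /runtime.v1alpha2.RuntimeService/ListContainers over crio's gRPC socket with an empty ContainerFilter, and crio answers with every container it tracks, exited or running. As a rough sketch only (the socket path and the insecure local dial are assumptions, not taken from this report), a minimal Go client against the v1alpha2 CRI API could look like:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// Assumed socket path for a default crio install; adjust as needed.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		// An empty request reproduces the "No filters were applied" path in the
		// log: crio returns the full container list, running and exited alike.
		resp, err := v1alpha2.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &v1alpha2.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

Run against the node at this point in the test, this should print essentially the same fourteen-entry list that the response above serializes.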
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.395624041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4c8e82c5-2283-42e8-9a63-7e8eee12de88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.395690032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4c8e82c5-2283-42e8-9a63-7e8eee12de88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.396018558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:764f2ed126e3a43ac48dcb89f104a8dcac5bbfccdb19d43f5fffe00917f83afc,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628887456054305027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95d1a6870f0a39dbd231d57d2c3ca123a182f7f0a6c8e350abf9ebd0ad3697b,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628887455962159063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882326f4f17f8c6b56e7849de9d494eb2c81f175d5ff3a12255a9de5a1b90606,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628887455899461161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd06a92e8bf04480fbd1af97073471ab25164ce1e87720724fbc0321b99f5930,PodSandboxId:beb6045bf99b7f49fe73f104182321909137192eae4966da4e80b652d4d6a0b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628887451240394708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083d0c0f0ffdebb07b140504b5b61ece9bbbc8279d12a116ad5a8606f02536a7,PodSandboxId:347c58d35c3ef20e394825467a3f437147c835910ab571ab28d985a058684b2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628887450940460005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66a092dda6eacba8681a594c2429f3e1af97d9e1840d965d99da432b60479f7,PodSandboxId:561c5b98fa34d4f4b92f6fa824dccaafc33921f26e3c825953a2fe00b21beaa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628887450654028313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a9b5c051329bdddede7a859dd793b08d,},Annotations:map[string]string{io.kubernetes.container.hash: bc145509,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32762c56b4b7de3e38ac81128a1f270dc83bf44c4ed500415b4ea1664eccf15,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628887450548016507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10aaacd07976c51fa87fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628887375821188744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628887374984630064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628887373603213589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628887349178415096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaacd07976c51fa8
7fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1,PodSandboxId:b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628887347928352106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de0
1ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712,PodSandboxId:5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628887347549107216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a164dad53f9dd6a6588dce0ad72fad,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 21ba938b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390,PodSandboxId:574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628887347523777369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash
: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4c8e82c5-2283-42e8-9a63-7e8eee12de88 name=/runtime.v1alpha2.RuntimeService/ListContainers
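Reading that payload: every kube-system workload appears twice because the log comes from after the preload restart. The CONTAINER_EXITED records are the first boot; the CONTAINER_RUNNING ones are their replacements, which either bump the container Attempt to 1 inside the same sandbox (kube-proxy, storage-provisioner, coredns, etcd) or start over at Attempt 0 in a brand-new sandbox and pod UID (kube-apiserver, kube-controller-manager, kube-scheduler). A small helper for splitting such a response into its live set might look like the following sketch against the same assumed v1alpha2 API; this is illustrative, not code from the test suite:

	package main

	import (
		"fmt"

		"k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	// liveContainers keeps only the CONTAINER_RUNNING entries, i.e. the
	// attempts that survived the restart, dropping their exited predecessors.
	func liveContainers(resp *v1alpha2.ListContainersResponse) []*v1alpha2.Container {
		var live []*v1alpha2.Container
		for _, c := range resp.Containers {
			if c.State == v1alpha2.ContainerState_CONTAINER_RUNNING {
				live = append(live, c)
			}
		}
		return live
	}

	func main() {
		// Placeholder: in practice this would be the response returned by
		// ListContainers, e.g. the fourteen-entry payload logged above.
		resp := &v1alpha2.ListContainersResponse{}
		for _, c := range liveContainers(resp) {
			fmt.Printf("%s attempt=%d\n", c.Metadata.Name, c.Metadata.Attempt)
		}
	}

Fed the response logged above, this would keep the seven running containers and discard the seven exited ones.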
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.420214742Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=f75c0e25-6bab-4ce8-8349-01fda39bccce name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.420517770Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:beb6045bf99b7f49fe73f104182321909137192eae4966da4e80b652d4d6a0b8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-20210813204102-30853,Uid:29b5a3494fd7c53351d2b61e9b662a3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628887449861833202,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29b5a3494fd7c53351d2b61e9b662a3a,kubernetes.io/config.seen: 2021-08-13T20:44:08.988102329Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:347c58d35c3ef20e394825467a3f437147c835910ab571ab28d985a058684b2d,M
etadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-20210813204102-30853,Uid:c7178d8492f798ee160e507a1f6158eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628887449799945891,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c7178d8492f798ee160e507a1f6158eb,kubernetes.io/config.seen: 2021-08-13T20:44:08.988095627Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:561c5b98fa34d4f4b92f6fa824dccaafc33921f26e3c825953a2fe00b21beaa6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-20210813204102-30853,Uid:a9b5c051329bdddede7a859dd793b08d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628887449787180011,Labels:map[string]string{compon
ent: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b5c051329bdddede7a859dd793b08d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a9b5c051329bdddede7a859dd793b08d,kubernetes.io/config.seen: 2021-08-13T20:44:08.988089124Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:57d1deb0-cf54-4d69-bfc3-be8dc66981e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628887375112593071,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotations:map[string]string{
kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2021-08-13T20:42:54.733999851Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&PodSandboxMetadata{Name:kube-proxy-487tx,Uid:f1df66f0-7c13-4c31-ab6a-49d1396711ba,Namespace:kube-system,Attempt
:0,},State:SANDBOX_READY,CreatedAt:1628887373855523520,Labels:map[string]string{controller-revision-hash: 68bd87b66,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-13T20:42:52.507975655Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&PodSandboxMetadata{Name:coredns-6955765f44-dxl54,Uid:73136e6c-4b55-4e8d-939f-f04181286524,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628887372963346732,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,k8s-app: kube-dns,pod-template-hash: 6955765f44,},Annotations:map[str
ing]string{kubernetes.io/config.seen: 2021-08-13T20:42:52.51147394Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-20210813204102-30853,Uid:19a164dad53f9dd6a6588dce0ad72fad,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1628887346650009166,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a164dad53f9dd6a6588dce0ad72fad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 19a164dad53f9dd6a6588dce0ad72fad,kubernetes.io/config.seen: 2021-08-13T20:42:24.802657506Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager
-test-preload-20210813204102-30853,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1628887346645483597,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 603b914543a305bf066dc8de01ce2232,kubernetes.io/config.seen: 2021-08-13T20:42:24.802660394Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-20210813204102-30853,Uid:10aaacd07976c51fa87fbefd9b86418c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628887346636826260,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.n
ame: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaacd07976c51fa87fbefd9b86418c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10aaacd07976c51fa87fbefd9b86418c,kubernetes.io/config.seen: 2021-08-13T20:42:24.802651456Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-20210813204102-30853,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1628887346617482863,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb577061a17ad23cfbbf52e9419bf32a,kuber
netes.io/config.seen: 2021-08-13T20:42:24.802662662Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=f75c0e25-6bab-4ce8-8349-01fda39bccce name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
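The ListPodSandbox response mirrors that picture: the pre-restart apiserver, controller-manager, and scheduler sandboxes linger as SANDBOX_NOTREADY alongside their SANDBOX_READY successors, because the request above passed Filter:nil. A caller that only wants live pods can ask crio to filter server-side; a hedged sketch, reusing the assumed socket from the earlier example:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		// Unlike the Filter:nil request in the log, this asks crio to return
		// only SANDBOX_READY pods, omitting the stale pre-restart sandboxes.
		resp, err := v1alpha2.NewRuntimeServiceClient(conn).ListPodSandbox(ctx,
			&v1alpha2.ListPodSandboxRequest{
				Filter: &v1alpha2.PodSandboxFilter{
					State: &v1alpha2.PodSandboxStateValue{
						State: v1alpha2.PodSandboxState_SANDBOX_READY,
					},
				},
			})
		if err != nil {
			log.Fatalf("ListPodSandbox: %v", err)
		}
		for _, s := range resp.Items {
			fmt.Printf("%s %s/%s\n", s.Id[:13], s.Metadata.Namespace, s.Metadata.Name)
		}
	}

With the state filter applied, the three NOTREADY control-plane sandboxes drop out and only the seven READY sandboxes come back.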
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.481743276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=224858da-fdab-4957-94bc-6e68cc3c97b9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.481983288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=224858da-fdab-4957-94bc-6e68cc3c97b9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:44:23 test-preload-20210813204102-30853 crio[4701]: time="2021-08-13 20:44:23.483243157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:764f2ed126e3a43ac48dcb89f104a8dcac5bbfccdb19d43f5fffe00917f83afc,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628887456054305027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95d1a6870f0a39dbd231d57d2c3ca123a182f7f0a6c8e350abf9ebd0ad3697b,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628887455962159063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882326f4f17f8c6b56e7849de9d494eb2c81f175d5ff3a12255a9de5a1b90606,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628887455899461161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd06a92e8bf04480fbd1af97073471ab25164ce1e87720724fbc0321b99f5930,PodSandboxId:beb6045bf99b7f49fe73f104182321909137192eae4966da4e80b652d4d6a0b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628887451240394708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083d0c0f0ffdebb07b140504b5b61ece9bbbc8279d12a116ad5a8606f02536a7,PodSandboxId:347c58d35c3ef20e394825467a3f437147c835910ab571ab28d985a058684b2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628887450940460005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66a092dda6eacba8681a594c2429f3e1af97d9e1840d965d99da432b60479f7,PodSandboxId:561c5b98fa34d4f4b92f6fa824dccaafc33921f26e3c825953a2fe00b21beaa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628887450654028313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a9b5c051329bdddede7a859dd793b08d,},Annotations:map[string]string{io.kubernetes.container.hash: bc145509,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32762c56b4b7de3e38ac81128a1f270dc83bf44c4ed500415b4ea1664eccf15,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628887450548016507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10aaacd07976c51fa87fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74,PodSandboxId:16e102daa652e5afa5e308bb035cc664e3027df1be43eb538f189b9906c12dc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628887375821188744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d1deb0-cf54-4d69-bfc3-be8dc66981e8,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4e50075b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb,PodSandboxId:c5f50c613956e0225fbfc8f07b359aebe56d125433906bc6fb04d41e463c71b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628887374984630064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-487tx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df66f0-7c13-4c31-ab6a-49d1396711ba,},Annotations:map[string]string{io.kubernetes.container.hash: 67b8d45c,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e,PodSandboxId:e7421e38ce5c28390c49e8da59a7a9e840356971473275a6cd4367c04452a227,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628887373603213589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-dxl54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73136e6c-4b55-4e8d-939f-f04181286524,},Annotations:map[string]string{io.kubernetes.container.hash: 2699146b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb,PodSandboxId:21a3725fdbb84da457d34efd899f40af71568af91289e1451bf77dc94b5beb03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628887349178415096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaacd07976c51fa8
7fbefd9b86418c,},Annotations:map[string]string{io.kubernetes.container.hash: b066e24e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1,PodSandboxId:b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628887347928352106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de0
1ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712,PodSandboxId:5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628887347549107216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a164dad53f9dd6a6588dce0ad72fad,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 21ba938b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390,PodSandboxId:574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628887347523777369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813204102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash
: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=224858da-fdab-4957-94bc-6e68cc3c97b9 name=/runtime.v1alpha2.RuntimeService/ListContainers
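
The Request/Response pairs above are CRI-O's gRPC debug traces of the CRI ListContainers call (/runtime.v1alpha2.RuntimeService/ListContainers), which the kubelet polls continuously over the socket named in the node's cri-socket annotation. A minimal Go sketch of the same call follows; the k8s.io/cri-api v1alpha2 import and module versions are assumptions chosen to match the RPC names in the log, not minikube's own code.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
    	// Dial the socket from the kubeadm cri-socket annotation (/var/run/crio/crio.sock).
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty filter takes the "No filters were applied" path seen in the log.
    	resp, err := pb.NewRuntimeServiceClient(conn).ListContainers(ctx, &pb.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%-13.13s %-17s %s\n", c.Id, c.State, c.Metadata.Name)
    	}
    }

Run as root on the node (minikube ssh), this should print the same container set as the response above, one line per container.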
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	764f2ed126e3a       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19   7 seconds ago        Running             kube-proxy                1                   c5f50c613956e
	c95d1a6870f0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 seconds ago        Running             storage-provisioner       1                   16e102daa652e
	882326f4f17f8       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61   7 seconds ago        Running             coredns                   1                   e7421e38ce5c2
	fd06a92e8bf04       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad   12 seconds ago       Running             kube-scheduler            0                   beb6045bf99b7
	083d0c0f0ffde       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302   12 seconds ago       Running             kube-controller-manager   0                   347c58d35c3ef
	c66a092dda6ea       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b   12 seconds ago       Running             kube-apiserver            0                   561c5b98fa34d
	f32762c56b4b7       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f   13 seconds ago       Running             etcd                      1                   21a3725fdbb84
	13ae50634abed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       0                   16e102daa652e
	0b0890b6e3342       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19   About a minute ago   Exited              kube-proxy                0                   c5f50c613956e
	5a83b5d83f73a       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61   About a minute ago   Exited              coredns                   0                   e7421e38ce5c2
	4a7350a3413ad       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f   About a minute ago   Exited              etcd                      0                   21a3725fdbb84
	89b4071bcb867       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056   About a minute ago   Exited              kube-controller-manager   0                   b58850b481f1c
	03d5024b4d0a9       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2   About a minute ago   Exited              kube-apiserver            0                   5a62bb610d1a4
	69df9d9754464       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28   About a minute ago   Exited              kube-scheduler            0                   574cfc517baf6
	
	* 
	* ==> coredns [5a83b5d83f73a6bedc57cc9502244e244ed5099319184fde7dc06db54746948e] <==
	* linux/amd64, go1.13.4, c2fd1b2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0813 20:43:23.825841       1 trace.go:82] Trace[268963386]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2021-08-13 20:42:53.824274069 +0000 UTC m=+0.080636130) (total time: 30.001333445s):
	Trace[268963386]: [30.001333445s] [30.001333445s] END
	E0813 20:43:23.826033       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.826033       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.826033       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:43:23.825842       1 trace.go:82] Trace[121170379]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2021-08-13 20:42:53.824292364 +0000 UTC m=+0.080654450) (total time: 30.001335614s):
	Trace[121170379]: [30.001335614s] [30.001335614s] END
	E0813 20:43:23.826072       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.826072       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.826072       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:43:23.828434       1 trace.go:82] Trace[1615673136]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2021-08-13 20:42:53.827243758 +0000 UTC m=+0.083605530) (total time: 30.00117519s):
	Trace[1615673136]: [30.00117519s] [30.00117519s] END
	E0813 20:43:23.828525       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.828525       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.828525       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = ce724ab0839054f2e7df24df11d60a5e
	[INFO] Reloading complete
	E0813 20:43:23.826033       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.826072       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:43:23.828525       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
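
Every reflector error above is the same symptom: a 30-second TCP dial timeout from the coredns pod to 10.96.0.1:443, the ClusterIP of the default/kubernetes Service, meaning the apiserver was unreachable from pod networking during this window. A bare dial reproduces the failing step; the address and timeout are taken from the log lines, and the probe is only meaningful from a pod or the node itself, where kube-proxy's Service rules apply.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same target and budget as the failing client-go reflectors above.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 30*time.Second)
    	if err != nil {
    		// Expected failure mode here: "dial tcp 10.96.0.1:443: i/o timeout".
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("kubernetes Service ClusterIP reachable")
    }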
	
	* 
	* ==> coredns [882326f4f17f8c6b56e7849de9d494eb2c81f175d5ff3a12255a9de5a1b90606] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ce724ab0839054f2e7df24df11d60a5e
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210813204102-30853
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210813204102-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=test-preload-20210813204102-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_42_37_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:42:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210813204102-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:44:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:44:16 +0000   Fri, 13 Aug 2021 20:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:44:16 +0000   Fri, 13 Aug 2021 20:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:44:16 +0000   Fri, 13 Aug 2021 20:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:44:16 +0000   Fri, 13 Aug 2021 20:42:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    test-preload-20210813204102-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5b70d23241447eb907f86c91137f298
	  System UUID:                e5b70d23-2414-47eb-907f-86c91137f298
	  Boot ID:                    da7623eb-d9e9-4ada-8129-59e337ed5c67
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-dxl54                                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     91s
	  kube-system                 etcd-test-preload-20210813204102-30853                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-20210813204102-30853              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-test-preload-20210813204102-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-487tx                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-20210813204102-30853              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (3%)   170Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                           Message
	  ----    ------                   ----                 ----                                           -------
	  Normal  NodeHasSufficientMemory  118s (x5 over 119s)  kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x5 over 119s)  kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x5 over 119s)  kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet, test-preload-20210813204102-30853     Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s                 kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s                 kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet, test-preload-20210813204102-30853     Updated Node Allocatable limit across pods
	  Normal  NodeReady                96s                  kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeReady
	  Normal  Starting                 88s                  kube-proxy, test-preload-20210813204102-30853  Starting kube-proxy.
	  Normal  Starting                 14s                  kubelet, test-preload-20210813204102-30853     Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)    kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)    kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x7 over 14s)    kubelet, test-preload-20210813204102-30853     Node test-preload-20210813204102-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14s                  kubelet, test-preload-20210813204102-30853     Updated Node Allocatable limit across pods
	  Normal  Starting                 7s                   kube-proxy, test-preload-20210813204102-30853  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +4.729713] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000136] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.155711] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.034750] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.992176] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +1.301200] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006027] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.389793] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +6.275239] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.122912] systemd-fstab-generator[2156]: Ignoring "noauto" for root device
	[  +0.177268] systemd-fstab-generator[2182]: Ignoring "noauto" for root device
	[Aug13 20:42] systemd-fstab-generator[3316]: Ignoring "noauto" for root device
	[ +15.721344] systemd-fstab-generator[3745]: Ignoring "noauto" for root device
	[ +15.868726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug13 20:43] NFSD: Unable to end grace period: -110
	[ +15.625078] kauditd_printk_skb: 68 callbacks suppressed
	[  +7.702370] systemd-fstab-generator[4973]: Ignoring "noauto" for root device
	[  +0.212596] systemd-fstab-generator[4986]: Ignoring "noauto" for root device
	[  +0.237015] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
	[Aug13 20:44] systemd-fstab-generator[6089]: Ignoring "noauto" for root device
	[  +7.932677] kauditd_printk_skb: 20 callbacks suppressed
	
	* 
	* ==> etcd [4a7350a3413adfdc44fe446a6a38dbcd43a87767551bc424da8a8fdeace17fbb] <==
	* 2021-08-13 20:42:29.300166 I | etcdserver: 4e6b9cdcc1ed933f as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-13 20:42:29.300572 I | embed: listening for peers on 192.168.39.171:2380
	raft2021/08/13 20:42:29 INFO: 4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)
	2021-08-13 20:42:29.301230 I | etcdserver/membership: added member 4e6b9cdcc1ed933f [https://192.168.39.171:2380] to cluster c9ee22fca1de3e71
	raft2021/08/13 20:42:30 INFO: 4e6b9cdcc1ed933f is starting a new election at term 1
	raft2021/08/13 20:42:30 INFO: 4e6b9cdcc1ed933f became candidate at term 2
	raft2021/08/13 20:42:30 INFO: 4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 2
	raft2021/08/13 20:42:30 INFO: 4e6b9cdcc1ed933f became leader at term 2
	raft2021/08/13 20:42:30 INFO: raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 2
	2021-08-13 20:42:30.285467 I | etcdserver: published {Name:test-preload-20210813204102-30853 ClientURLs:[https://192.168.39.171:2379]} to cluster c9ee22fca1de3e71
	2021-08-13 20:42:30.285862 I | embed: ready to serve client requests
	2021-08-13 20:42:30.286555 I | embed: ready to serve client requests
	2021-08-13 20:42:30.287130 I | embed: serving client requests on 192.168.39.171:2379
	2021-08-13 20:42:30.287487 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:42:30.292827 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:42:30.293445 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:42:30.295821 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:42:50.391051 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (560.356163ms) to execute
	2021-08-13 20:42:50.391536 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" " with result "range_response_count:1 size:216" took too long (632.052781ms) to execute
	2021-08-13 20:43:48.171013 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " with result "range_response_count:0 size:7" took too long (295.720957ms) to execute
	2021-08-13 20:43:55.792537 W | etcdserver: read-only range request "key:\"/registry/podtemplates\" range_end:\"/registry/podtemplatet\" count_only:true " with result "range_response_count:0 size:5" took too long (130.23257ms) to execute
	2021-08-13 20:43:58.213851 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (442.405172ms) to execute
	2021-08-13 20:44:00.631273 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (216.719711ms) to execute
	2021-08-13 20:44:04.316073 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (224.780453ms) to execute
	2021-08-13 20:44:06.119499 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (399.416122ms) to execute
	
	* 
	* ==> etcd [f32762c56b4b7de3e38ac81128a1f270dc83bf44c4ed500415b4ea1664eccf15] <==
	* 2021-08-13 20:44:10.691280 I | embed: initial cluster = 
	2021-08-13 20:44:10.699580 I | etcdserver: restarting member 4e6b9cdcc1ed933f in cluster c9ee22fca1de3e71 at commit index 476
	raft2021/08/13 20:44:10 INFO: 4e6b9cdcc1ed933f switched to configuration voters=()
	raft2021/08/13 20:44:10 INFO: 4e6b9cdcc1ed933f became follower at term 2
	raft2021/08/13 20:44:10 INFO: newRaft 4e6b9cdcc1ed933f [peers: [], term: 2, commit: 476, applied: 0, lastindex: 476, lastterm: 2]
	2021-08-13 20:44:10.714948 W | auth: simple token is not cryptographically signed
	2021-08-13 20:44:10.719182 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/13 20:44:10 INFO: 4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)
	2021-08-13 20:44:10.721952 I | etcdserver/membership: added member 4e6b9cdcc1ed933f [https://192.168.39.171:2380] to cluster c9ee22fca1de3e71
	2021-08-13 20:44:10.722135 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:44:10.722233 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:44:10.722267 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:44:10.722370 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 20:44:10.722699 I | embed: listening for peers on 192.168.39.171:2380
	raft2021/08/13 20:44:12 INFO: 4e6b9cdcc1ed933f is starting a new election at term 2
	raft2021/08/13 20:44:12 INFO: 4e6b9cdcc1ed933f became candidate at term 3
	raft2021/08/13 20:44:12 INFO: 4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3
	raft2021/08/13 20:44:12 INFO: 4e6b9cdcc1ed933f became leader at term 3
	raft2021/08/13 20:44:12 INFO: raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3
	2021-08-13 20:44:12.034120 I | etcdserver: published {Name:test-preload-20210813204102-30853 ClientURLs:[https://192.168.39.171:2379]} to cluster c9ee22fca1de3e71
	2021-08-13 20:44:12.037100 I | embed: ready to serve client requests
	2021-08-13 20:44:12.068243 I | embed: serving client requests on 192.168.39.171:2379
	2021-08-13 20:44:12.072827 I | embed: ready to serve client requests
	2021-08-13 20:44:12.114003 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:44:17.398237 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-test-preload-20210813204102-30853\" " with result "range_response_count:1 size:2496" took too long (258.493258ms) to execute
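
Both etcd logs close with W-level "took too long" entries: etcd warns whenever a read-only range request exceeds its (default) 100 ms threshold, which on a nested test VM usually points at slow disk or CPU contention rather than data problems. A minimal clientv3 timing sketch is below, reusing the cert paths from the ClientTLS line above; the go.etcd.io/etcd 3.4.x import paths are an assumption chosen to match the "version: 3.4.3" line, and it must run on the node since the certs live there.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"go.etcd.io/etcd/clientv3"
    	"go.etcd.io/etcd/pkg/transport"
    )

    func main() {
    	// TLS material from the ClientTLS line in the etcd log.
    	tlsInfo := transport.TLSInfo{
    		CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
    		KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
    		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
    	}
    	tlsConfig, err := tlsInfo.ClientConfig()
    	if err != nil {
    		log.Fatal(err)
    	}

    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints:   []string{"https://127.0.0.1:2379"},
    		DialTimeout: 5 * time.Second,
    		TLS:         tlsConfig,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	// Same shape as the logged requests: one read-only range Get.
    	start := time.Now()
    	_, err = cli.Get(context.Background(), "/registry/health")
    	fmt.Printf("range request took %v (err=%v)\n", time.Since(start), err)
    }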
	
	* 
	* ==> kernel <==
	*  20:44:23 up 3 min,  0 users,  load average: 1.78, 0.98, 0.39
	Linux test-preload-20210813204102-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [03d5024b4d0a913912c7f9f287504c4011c4706ad669be55efe0c25ffd5aa712] <==
	* W0813 20:44:06.794846       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.794954       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.794990       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795018       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795082       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795130       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795197       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795233       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795311       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795371       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795412       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795479       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795514       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795571       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795610       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795638       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795732       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795760       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.795834       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.796403       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.796477       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.796506       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.796539       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:44:06.796674       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
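
This is the old apiserver (03d5024b4d0a9, CONTAINER_EXITED in the container status table) caught in grpc-go's background reconnect loop: its etcd client redials 127.0.0.1:2379 with backoff on each of its pooled connections while etcd is down during the restart, hence the burst of near-identical warnings. A blocking dial surfaces the same condition synchronously; note this sketch checks TCP reachability only, since the real endpoint (https://127.0.0.1:2379) expects TLS.

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()

    	// WithBlock waits for a ready connection instead of retrying in the
    	// background the way the apiserver's etcd client does above.
    	conn, err := grpc.DialContext(ctx, "127.0.0.1:2379", grpc.WithInsecure(), grpc.WithBlock())
    	if err != nil {
    		log.Fatalf("etcd endpoint not reachable: %v", err) // while "connection refused" persists
    	}
    	defer conn.Close()
    	log.Println("etcd endpoint accepting TCP connections")
    }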
	
	* 
	* ==> kube-apiserver [c66a092dda6eacba8681a594c2429f3e1af97d9e1840d965d99da432b60479f7] <==
	* I0813 20:44:15.251470       1 crd_finalizer.go:263] Starting CRDFinalizer
	I0813 20:44:15.255172       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0813 20:44:15.255267       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	I0813 20:44:15.255276       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0813 20:44:15.263664       1 controller.go:85] Starting OpenAPI controller
	I0813 20:44:15.276528       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	I0813 20:44:15.276655       1 naming_controller.go:288] Starting NamingConditionController
	I0813 20:44:15.276676       1 establishing_controller.go:73] Starting EstablishingController
	I0813 20:44:15.276694       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0813 20:44:15.276720       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0813 20:44:15.338853       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:44:15.344720       1 cache.go:39] Caches are synced for autoregister controller
	I0813 20:44:15.360393       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:44:15.360546       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 20:44:15.361413       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0813 20:44:15.366550       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0813 20:44:16.243309       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0813 20:44:16.243340       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:44:16.243352       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:44:16.274165       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0813 20:44:17.618744       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0813 20:44:17.653660       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0813 20:44:17.830717       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0813 20:44:17.922964       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:44:17.933726       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [083d0c0f0ffdebb07b140504b5b61ece9bbbc8279d12a116ad5a8606f02536a7] <==
	* I0813 20:44:18.631028       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0813 20:44:18.631066       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
	I0813 20:44:18.631132       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
	I0813 20:44:18.631172       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
	I0813 20:44:18.631208       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0813 20:44:18.631263       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0813 20:44:18.631318       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
	I0813 20:44:18.631438       1 controllermanager.go:533] Started "resourcequota"
	I0813 20:44:18.631525       1 resource_quota_controller.go:271] Starting resource quota controller
	I0813 20:44:18.631648       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I0813 20:44:18.631679       1 resource_quota_monitor.go:303] QuotaMonitor running
	I0813 20:44:18.645453       1 controllermanager.go:533] Started "csrsigning"
	I0813 20:44:18.646128       1 certificate_controller.go:118] Starting certificate controller "csrsigning"
	I0813 20:44:18.646268       1 shared_informer.go:197] Waiting for caches to sync for certificate-csrsigning
	I0813 20:44:18.655263       1 node_lifecycle_controller.go:388] Sending events to api server.
	I0813 20:44:18.655469       1 node_lifecycle_controller.go:423] Controller is using taint based evictions.
	I0813 20:44:18.655543       1 taint_manager.go:162] Sending events to api server.
	I0813 20:44:18.655612       1 node_lifecycle_controller.go:520] Controller will reconcile labels.
	I0813 20:44:18.655654       1 controllermanager.go:533] Started "nodelifecycle"
	I0813 20:44:18.655859       1 node_lifecycle_controller.go:554] Starting node controller
	I0813 20:44:18.655987       1 shared_informer.go:197] Waiting for caches to sync for taint
	I0813 20:44:18.677074       1 controllermanager.go:533] Started "disruption"
	I0813 20:44:18.677328       1 disruption.go:330] Starting disruption controller
	I0813 20:44:18.678278       1 shared_informer.go:197] Waiting for caches to sync for disruption
	I0813 20:44:18.708418       1 node_ipam_controller.go:94] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [89b4071bcb867328939db35afd806d4fdf824585e25f01e9259cea151f6c71e1] <==
	* I0813 20:42:52.415577       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"test-preload-20210813204102-30853", UID:"b3ae3125-0a1e-4e8e-b463-c8069bed27ff", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node test-preload-20210813204102-30853 event: Registered Node test-preload-20210813204102-30853 in Controller
	I0813 20:42:52.440334       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0813 20:42:52.440422       1 shared_informer.go:204] Caches are synced for service account 
	I0813 20:42:52.440628       1 shared_informer.go:204] Caches are synced for endpoint 
	I0813 20:42:52.442785       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"d22a3ee5-3085-46f2-9384-6837f4bbd73a", APIVersion:"apps/v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-lct6t
	I0813 20:42:52.473770       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"d22a3ee5-3085-46f2-9384-6837f4bbd73a", APIVersion:"apps/v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-dxl54
	I0813 20:42:52.484326       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"ddb3dca1-6441-4b25-a542-df6cd3a3d1c8", APIVersion:"apps/v1", ResourceVersion:"200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-487tx
	I0813 20:42:52.496854       1 shared_informer.go:204] Caches are synced for namespace 
	I0813 20:42:52.527155       1 shared_informer.go:204] Caches are synced for job 
	I0813 20:42:52.648496       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I0813 20:42:52.728260       1 shared_informer.go:204] Caches are synced for attach detach 
	I0813 20:42:52.754822       1 shared_informer.go:204] Caches are synced for stateful set 
	I0813 20:42:52.756328       1 shared_informer.go:204] Caches are synced for PVC protection 
	I0813 20:42:52.758301       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0813 20:42:52.761556       1 shared_informer.go:204] Caches are synced for disruption 
	I0813 20:42:52.761599       1 disruption.go:338] Sending events to api server.
	I0813 20:42:52.806262       1 shared_informer.go:204] Caches are synced for expand 
	I0813 20:42:52.810563       1 shared_informer.go:204] Caches are synced for resource quota 
	I0813 20:42:52.861958       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0813 20:42:52.862036       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:42:52.862126       1 shared_informer.go:204] Caches are synced for resource quota 
	I0813 20:42:52.869070       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2879eb2d-d2f8-4e17-b692-5d4e38963413", APIVersion:"apps/v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-6955765f44 to 1
	I0813 20:42:52.881984       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"d22a3ee5-3085-46f2-9384-6837f4bbd73a", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-6955765f44-lct6t
	I0813 20:42:53.756862       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0813 20:42:53.756986       1 shared_informer.go:204] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [0b0890b6e3342bd5b4c19e624cc7c18bb262b02ca69d0c78ef107dd8997d3fbb] <==
	* W0813 20:42:55.357718       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 20:42:55.374656       1 node.go:135] Successfully retrieved node IP: 192.168.39.171
	I0813 20:42:55.374726       1 server_others.go:145] Using iptables Proxier.
	I0813 20:42:55.376496       1 server.go:571] Version: v1.17.0
	I0813 20:42:55.387196       1 config.go:313] Starting service config controller
	I0813 20:42:55.388545       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 20:42:55.390032       1 config.go:131] Starting endpoints config controller
	I0813 20:42:55.391341       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 20:42:55.491315       1 shared_informer.go:204] Caches are synced for service config 
	I0813 20:42:55.491815       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-proxy [764f2ed126e3a43ac48dcb89f104a8dcac5bbfccdb19d43f5fffe00917f83afc] <==
	* W0813 20:44:16.440595       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 20:44:16.457208       1 node.go:135] Successfully retrieved node IP: 192.168.39.171
	I0813 20:44:16.457453       1 server_others.go:145] Using iptables Proxier.
	I0813 20:44:16.458118       1 server.go:571] Version: v1.17.0
	I0813 20:44:16.464028       1 config.go:131] Starting endpoints config controller
	I0813 20:44:16.464063       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 20:44:16.464108       1 config.go:313] Starting service config controller
	I0813 20:44:16.464113       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 20:44:16.564383       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0813 20:44:16.564701       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [69df9d9754464dd45f3e29f80b17c66a436319b8f1821cdedfcc0f4a8d3d0390] <==
	* E0813 20:42:35.099172       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:42:35.107227       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:42:35.108853       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:42:35.110002       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:42:35.110816       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:42:35.113375       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:42:35.117611       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:42:35.118843       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:42:35.119036       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:42:35.121197       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:42:35.123403       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:42:35.123457       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0813 20:42:36.190071       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0813 20:44:07.230229       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.230462       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=9m0s&timeoutSeconds=540&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.230636       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=9m46s&timeoutSeconds=586&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.230729       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.231472       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.231683       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=155&timeout=9m12s&timeoutSeconds=552&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.231782       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=11&timeout=8m28s&timeoutSeconds=508&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.231806       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=195&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.231834       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=371&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.232199       1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%2Cstatus.phase%3DSucceeded&resourceVersion=421&timeoutSeconds=402&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.232378       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=307&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	E0813 20:44:07.232629       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=423&timeout=9m51s&timeoutSeconds=591&watch=true: dial tcp 192.168.39.171:8443: connect: connection refused
	
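Annotation: the burst of "forbidden" list errors above is ordinary kube-scheduler startup noise, since the scheduler comes up before RBAC bootstrapping finishes; the connection-refused watch failures at 20:44:07 coincide with the apiserver restart that this preload test performs. A hedged spot-check of the scheduler's RBAC once the apiserver is reachable again, reusing the context name from this run:

	# verify system:kube-scheduler can list pods cluster-wide after things settle
	kubectl --context test-preload-20210813204102-30853 auth can-i list pods \
	  --as=system:kube-scheduler --all-namespaces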
	* 
	* ==> kube-scheduler [fd06a92e8bf04480fbd1af97073471ab25164ce1e87720724fbc0321b99f5930] <==
	* I0813 20:44:11.988001       1 serving.go:312] Generated self-signed cert in-memory
	W0813 20:44:12.375053       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 20:44:12.375323       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 20:44:15.379040       1 authorization.go:47] Authorization is disabled
	W0813 20:44:15.379187       1 authentication.go:92] Authentication is disabled
	I0813 20:44:15.379231       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0813 20:44:15.382323       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0813 20:44:15.383355       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0813 20:44:15.389914       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0813 20:44:15.383618       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0813 20:44:15.393645       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:44:15.393755       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:44:15.490317       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I0813 20:44:15.494110       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:41:17 UTC, end at Fri 2021-08-13 20:44:24 UTC. --
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: E0813 20:44:15.184938    6097 kubelet.go:2263] node "test-preload-20210813204102-30853" not found
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: E0813 20:44:15.308685    6097 kubelet.go:2263] node "test-preload-20210813204102-30853" not found
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.322815    6097 kubelet.go:1645] Trying to delete pod kube-apiserver-test-preload-20210813204102-30853_kube-system 32c5ef49-7068-4bec-9923-5b62e5d42cff
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.323385    6097 kubelet.go:1645] Trying to delete pod kube-scheduler-test-preload-20210813204102-30853_kube-system 440a8888-5b5d-4753-931d-525b4589c854
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.324477    6097 kubelet.go:1645] Trying to delete pod kube-controller-manager-test-preload-20210813204102-30853_kube-system c6140253-68ba-4c77-883e-1d59ed05920e
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.404462    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f1df66f0-7c13-4c31-ab6a-49d1396711ba-kube-proxy") pod "kube-proxy-487tx" (UID: "f1df66f0-7c13-4c31-ab6a-49d1396711ba")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.404760    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-jr9mj" (UniqueName: "kubernetes.io/secret/f1df66f0-7c13-4c31-ab6a-49d1396711ba-kube-proxy-token-jr9mj") pod "kube-proxy-487tx" (UID: "f1df66f0-7c13-4c31-ab6a-49d1396711ba")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405121    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/57d1deb0-cf54-4d69-bfc3-be8dc66981e8-tmp") pod "storage-provisioner" (UID: "57d1deb0-cf54-4d69-bfc3-be8dc66981e8")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405351    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f1df66f0-7c13-4c31-ab6a-49d1396711ba-xtables-lock") pod "kube-proxy-487tx" (UID: "f1df66f0-7c13-4c31-ab6a-49d1396711ba")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405381    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-76k85" (UniqueName: "kubernetes.io/secret/73136e6c-4b55-4e8d-939f-f04181286524-coredns-token-76k85") pod "coredns-6955765f44-dxl54" (UID: "73136e6c-4b55-4e8d-939f-f04181286524")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405408    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f1df66f0-7c13-4c31-ab6a-49d1396711ba-lib-modules") pod "kube-proxy-487tx" (UID: "f1df66f0-7c13-4c31-ab6a-49d1396711ba")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405429    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-5h8dk" (UniqueName: "kubernetes.io/secret/57d1deb0-cf54-4d69-bfc3-be8dc66981e8-storage-provisioner-token-5h8dk") pod "storage-provisioner" (UID: "57d1deb0-cf54-4d69-bfc3-be8dc66981e8")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405449    6097 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73136e6c-4b55-4e8d-939f-f04181286524-config-volume") pod "coredns-6955765f44-dxl54" (UID: "73136e6c-4b55-4e8d-939f-f04181286524")
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.405474    6097 reconciler.go:156] Reconciler: start to sync state
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: W0813 20:44:15.406708    6097 kubelet.go:1649] Deleted mirror pod "kube-scheduler-test-preload-20210813204102-30853_kube-system(440a8888-5b5d-4753-931d-525b4589c854)" because it is outdated
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.409370    6097 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.244.0.0/24
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:15.410530    6097 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: W0813 20:44:15.438015    6097 kubelet.go:1649] Deleted mirror pod "kube-controller-manager-test-preload-20210813204102-30853_kube-system(c6140253-68ba-4c77-883e-1d59ed05920e)" because it is outdated
	Aug 13 20:44:15 test-preload-20210813204102-30853 kubelet[6097]: W0813 20:44:15.438461    6097 kubelet.go:1649] Deleted mirror pod "kube-apiserver-test-preload-20210813204102-30853_kube-system(32c5ef49-7068-4bec-9923-5b62e5d42cff)" because it is outdated
	Aug 13 20:44:16 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:16.326797    6097 kubelet.go:1645] Trying to delete pod kube-scheduler-test-preload-20210813204102-30853_kube-system 440a8888-5b5d-4753-931d-525b4589c854
	Aug 13 20:44:16 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:16.333184    6097 kubelet_node_status.go:112] Node test-preload-20210813204102-30853 was previously registered
	Aug 13 20:44:16 test-preload-20210813204102-30853 kubelet[6097]: I0813 20:44:16.333958    6097 kubelet_node_status.go:73] Successfully registered node test-preload-20210813204102-30853
	Aug 13 20:44:18 test-preload-20210813204102-30853 kubelet[6097]: W0813 20:44:18.375934    6097 pod_container_deletor.go:75] Container "b58850b481f1c49856203d52d33845888b11ad9da4e5bdf341e7af45ed2d968f" not found in pod's containers
	Aug 13 20:44:18 test-preload-20210813204102-30853 kubelet[6097]: W0813 20:44:18.389006    6097 pod_container_deletor.go:75] Container "574cfc517baf6f1d436993aa3f442a5d33d4a607bc84913a5213c500a30ca928" not found in pod's containers
	Aug 13 20:44:18 test-preload-20210813204102-30853 kubelet[6097]: W0813 20:44:18.397666    6097 pod_container_deletor.go:75] Container "5a62bb610d1a4624ef1eddfba319c12edd096eda2919fa175f5a7990da02b649" not found in pod's containers
	
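Annotation: the repeated node "not found" errors stop once the kubelet re-registers the node at 20:44:16, and the "Deleted mirror pod ... because it is outdated" warnings are the kubelet replacing stale static-pod mirrors after the restart. A hedged spot-check that the node actually came back:

	# confirm the node re-registered and is Ready
	kubectl --context test-preload-20210813204102-30853 get node \
	  test-preload-20210813204102-30853 -o wide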
	* 
	* ==> storage-provisioner [13ae50634abed391ea0b59a4fbb1d4ac9d54b5ea6f41c4cd86a067035e4f9a74] <==
	* I0813 20:42:55.950360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:42:55.972246       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:42:55.972431       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:42:55.986275       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:42:55.987437       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3402f8a6-b697-4bf4-9d09-7bcfa6744235", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210813204102-30853_bfc63aa1-5a6a-4162-b7df-c04f6ddaf7a0 became leader
	I0813 20:42:55.988162       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210813204102-30853_bfc63aa1-5a6a-4162-b7df-c04f6ddaf7a0!
	I0813 20:42:56.089093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210813204102-30853_bfc63aa1-5a6a-4162-b7df-c04f6ddaf7a0!
	
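Annotation: the second storage-provisioner instance below stalls at "attempting to acquire leader lease" because this first instance still holds it. The lock is an Endpoints object (see the LeaderElection event above), so the current holder can be read off its annotations; a hedged sketch:

	# the leader identity is recorded on the lock object in kube-system
	kubectl --context test-preload-20210813204102-30853 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml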
	* 
	* ==> storage-provisioner [c95d1a6870f0a39dbd231d57d2c3ca123a182f7f0a6c8e350abf9ebd0ad3697b] <==
	* I0813 20:44:16.143838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:44:16.180787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:44:16.181538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210813204102-30853 -n test-preload-20210813204102-30853
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210813204102-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context test-preload-20210813204102-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context test-preload-20210813204102-30853 describe pod : exit status 1 (46.777144ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context test-preload-20210813204102-30853 describe pod : exit status 1
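Annotation: this last failure is an artifact of the post-mortem helper itself: the jsonpath query above matched no non-running pods, so kubectl describe pod was invoked with an empty resource name. A hedged guard for scripts that chain the two commands the same way (the variable name is illustrative, and it mirrors the helper's own invocation, namespace caveats included):

	# only describe pods if the selector actually matched something
	pods=$(kubectl --context test-preload-20210813204102-30853 get po -A \
	  --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}')
	[ -n "$pods" ] && kubectl --context test-preload-20210813204102-30853 describe pod $pods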
helpers_test.go:176: Cleaning up "test-preload-20210813204102-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210813204102-30853
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210813204102-30853: (1.184057182s)
--- FAIL: TestPreload (203.22s)
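Annotation: for reference, the failing test can be re-run in isolation; assuming the standard minikube repo layout, where the integration suite lives under test/integration, something like the following (the kvm2/crio arguments this job uses would still need to be passed through the suite's own flags):

	# hedged sketch: re-run just TestPreload
	go test ./test/integration -run TestPreload -timeout 30m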

TestKubernetesUpgrade (902.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204600-30853 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0813 20:46:07.715043   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:46:53.558734   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204600-30853 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m42.423604424s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813204600-30853
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813204600-30853: (2.280554926s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813204600-30853 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813204600-30853 status --format={{.Host}}: exit status 7 (83.011673ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
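Annotation: minikube status encodes component states as bits of the exit code (host, cluster, and Kubernetes, from least significant upward), so in this minikube generation exit status 7 right after a deliberate stop simply means everything is down, which is why the test treats it as acceptable. A hedged way to see the states next to the code:

	# show component states alongside the exit code
	out/minikube-linux-amd64 status -p kubernetes-upgrade-20210813204600-30853 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'; echo "exit=$?"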
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204600-30853 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204600-30853 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m14.982001663s)

-- stdout --
	* [kubernetes-upgrade-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20210813204600-30853 in cluster kubernetes-upgrade-20210813204600-30853
	* Restarting existing kvm2 VM for "kubernetes-upgrade-20210813204600-30853" ...
	* Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
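Annotation: the "Generating certificates and keys ... / Booting up control plane ..." pair appearing twice in the stdout above indicates the control-plane bring-up was attempted, failed, and retried before the start ultimately gave up after roughly 13 minutes with exit status 109. A hedged first check from the host is whether an apiserver container ever stayed up inside the VM:

	# list apiserver containers (running or exited) via CRI
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-20210813204600-30853 -- \
	  sudo crictl ps -a --name kube-apiserver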
** stderr ** 
	I0813 20:47:45.040344    2943 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:47:45.040479    2943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:47:45.040501    2943 out.go:311] Setting ErrFile to fd 2...
	I0813 20:47:45.040512    2943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:47:45.040726    2943 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:47:45.041119    2943 out.go:305] Setting JSON to false
	I0813 20:47:45.083871    2943 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9027,"bootTime":1628878638,"procs":181,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:47:45.083977    2943 start.go:121] virtualization: kvm guest
	I0813 20:47:45.086427    2943 out.go:177] * [kubernetes-upgrade-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:47:45.086532    2943 notify.go:169] Checking for updates...
	I0813 20:47:45.087976    2943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:47:45.089424    2943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:47:45.090928    2943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:47:45.092265    2943 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:47:45.092812    2943 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:47:45.093435    2943 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:47:45.093502    2943 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:47:45.106128    2943 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0813 20:47:45.106559    2943 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:47:45.107179    2943 main.go:130] libmachine: Using API Version  1
	I0813 20:47:45.107208    2943 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:47:45.107654    2943 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:47:45.107839    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:47:45.108100    2943 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:47:45.108564    2943 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:47:45.108603    2943 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:47:45.123041    2943 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0813 20:47:45.123586    2943 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:47:45.124159    2943 main.go:130] libmachine: Using API Version  1
	I0813 20:47:45.124187    2943 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:47:45.124530    2943 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:47:45.124709    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:47:45.162247    2943 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:47:45.162276    2943 start.go:278] selected driver: kvm2
	I0813 20:47:45.162283    2943 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:47:45.162435    2943 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:47:45.163795    2943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:47:45.163949    2943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:47:45.178040    2943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:47:45.178462    2943 cni.go:93] Creating CNI manager for ""
	I0813 20:47:45.178481    2943 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:47:45.178491    2943 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:47:45.178641    2943 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:47:45.180729    2943 out.go:177] * Starting control plane node kubernetes-upgrade-20210813204600-30853 in cluster kubernetes-upgrade-20210813204600-30853
	I0813 20:47:45.180788    2943 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:47:45.180826    2943 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:47:45.180840    2943 cache.go:56] Caching tarball of preloaded images
	I0813 20:47:45.180994    2943 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:47:45.181016    2943 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:47:45.181153    2943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/config.json ...
	I0813 20:47:45.181346    2943 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:47:45.181379    2943 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813204600-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:47:45.181452    2943 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813204600-30853" in 51.165µs
	I0813 20:47:45.181472    2943 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:47:45.181486    2943 fix.go:55] fixHost starting: 
	I0813 20:47:45.181885    2943 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:47:45.181935    2943 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:47:45.192897    2943 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44683
	I0813 20:47:45.193354    2943 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:47:45.193971    2943 main.go:130] libmachine: Using API Version  1
	I0813 20:47:45.193994    2943 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:47:45.194404    2943 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:47:45.194621    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:47:45.194821    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetState
	I0813 20:47:45.198317    2943 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813204600-30853: state=Stopped err=<nil>
	I0813 20:47:45.198348    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	W0813 20:47:45.198497    2943 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:47:45.200556    2943 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210813204600-30853" ...
	I0813 20:47:45.200590    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .Start
	I0813 20:47:45.200742    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Ensuring networks are active...
	I0813 20:47:45.203063    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Ensuring network default is active
	I0813 20:47:45.203421    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Ensuring network mk-kubernetes-upgrade-20210813204600-30853 is active
	I0813 20:47:45.203800    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Getting domain xml...
	I0813 20:47:45.205847    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Creating domain...
	I0813 20:47:45.660333    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Waiting to get IP...
	I0813 20:47:45.661656    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:47:45.662229    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has current primary IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:47:45.662262    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Found IP for machine: 192.168.50.24
	I0813 20:47:45.662281    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Reserving static IP address...
	I0813 20:47:45.662776    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210813204600-30853", mac: "52:54:00:b4:0f:21", ip: "192.168.50.24"} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:46:34 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:47:45.662814    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Reserved static IP address: 192.168.50.24
	I0813 20:47:45.662841    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210813204600-30853 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210813204600-30853", mac: "52:54:00:b4:0f:21", ip: "192.168.50.24"}
	I0813 20:47:45.662881    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Getting to WaitForSSH function...
	I0813 20:47:45.662903    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Waiting for SSH to be available...
	I0813 20:47:45.668934    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:47:45.669334    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:46:34 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:47:45.669370    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:47:45.669476    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Using SSH client type: external
	I0813 20:47:45.669505    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa (-rw-------)
	I0813 20:47:45.669551    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:47:45.669581    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | About to run SSH command:
	I0813 20:47:45.669595    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | exit 0
	I0813 20:48:00.727749    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 20:48:00.727783    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 20:48:00.727796    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | command : exit 0
	I0813 20:48:00.727806    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | err     : exit status 255
	I0813 20:48:00.727819    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | output  : 
	I0813 20:48:03.728441    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Getting to WaitForSSH function...
	I0813 20:48:03.734375    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:03.734838    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:03.734881    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:03.735168    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Using SSH client type: external
	I0813 20:48:03.735198    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa (-rw-------)
	I0813 20:48:03.735237    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:48:03.735248    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | About to run SSH command:
	I0813 20:48:03.735260    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | exit 0
	I0813 20:48:03.883881    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | SSH cmd err, output: <nil>: 
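	# Annotation: the exit-255 probe at 20:48:00 followed by the clean exit 0 at
	# 20:48:03 is the normal WaitForSSH pattern; the first attempt races the
	# guest's sshd coming up and the loop simply retries. A hedged manual
	# equivalent, using the exact key path and address from this run:
	#   ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	#     -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa \
	#     docker@192.168.50.24 'exit 0'; echo "exit=$?"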
	I0813 20:48:03.884313    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetConfigRaw
	I0813 20:48:03.885194    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetIP
	I0813 20:48:03.891945    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:03.892378    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:03.892405    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:03.892764    2943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/config.json ...
	I0813 20:48:03.892980    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:03.893183    2943 machine.go:88] provisioning docker machine ...
	I0813 20:48:03.893210    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:03.893413    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetMachineName
	I0813 20:48:03.893553    2943 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210813204600-30853"
	I0813 20:48:03.893577    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetMachineName
	I0813 20:48:03.893732    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:03.899094    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:03.899464    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:03.899491    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:03.899700    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:03.899838    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:03.899944    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:03.900090    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:03.900240    2943 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:03.900437    2943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I0813 20:48:03.900458    2943 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813204600-30853 && echo "kubernetes-upgrade-20210813204600-30853" | sudo tee /etc/hostname
	I0813 20:48:04.062530    2943 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813204600-30853
	
	I0813 20:48:04.062568    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:04.069336    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.069840    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:04.069912    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.070006    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:04.070290    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:04.070512    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:04.070761    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:04.070968    2943 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:04.071173    2943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I0813 20:48:04.071205    2943 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813204600-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813204600-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813204600-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:48:04.226167    2943 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:48:04.226203    2943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:48:04.226272    2943 buildroot.go:174] setting up certificates
	I0813 20:48:04.226287    2943 provision.go:83] configureAuth start
	I0813 20:48:04.226305    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetMachineName
	I0813 20:48:04.226610    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetIP
	I0813 20:48:04.233458    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.233943    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:04.233989    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.234210    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:04.239769    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.240181    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:04.240216    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.240466    2943 provision.go:138] copyHostCerts
	I0813 20:48:04.240542    2943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:48:04.240555    2943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:48:04.240612    2943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:48:04.240715    2943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:48:04.240726    2943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:48:04.240753    2943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:48:04.240833    2943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:48:04.240843    2943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:48:04.240867    2943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:48:04.240920    2943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813204600-30853 san=[192.168.50.24 192.168.50.24 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813204600-30853]
	I0813 20:48:04.325820    2943 provision.go:172] copyRemoteCerts
	I0813 20:48:04.325892    2943 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:48:04.325928    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:04.332424    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.332874    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:04.332909    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.333243    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:04.333426    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:04.333582    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:04.333752    2943 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa Username:docker}
	I0813 20:48:04.456389    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:48:04.480906    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:48:04.503824    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:48:04.528752    2943 provision.go:86] duration metric: configureAuth took 302.448179ms
	I0813 20:48:04.528781    2943 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:48:04.528973    2943 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:48:04.529104    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:04.535323    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.535650    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:04.535686    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:04.535996    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:04.536211    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:04.536374    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:04.536535    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:04.536687    2943 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:04.536875    2943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I0813 20:48:04.536897    2943 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:48:05.182165    2943 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:48:05.182203    2943 machine.go:91] provisioned docker machine in 1.289001605s
	I0813 20:48:05.182218    2943 start.go:267] post-start starting for "kubernetes-upgrade-20210813204600-30853" (driver="kvm2")
	I0813 20:48:05.182227    2943 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:48:05.182251    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:05.182559    2943 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:48:05.182599    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:05.188894    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.189340    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:05.189369    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.189692    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:05.189858    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:05.190008    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:05.190162    2943 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa Username:docker}
	I0813 20:48:05.289200    2943 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:48:05.294218    2943 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:48:05.294245    2943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:48:05.294322    2943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:48:05.294455    2943 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:48:05.294571    2943 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:48:05.301989    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:48:05.320367    2943 start.go:270] post-start completed in 138.133168ms
	I0813 20:48:05.320421    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:05.320694    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:05.326865    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.327301    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:05.327326    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.327542    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:05.327745    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:05.327919    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:05.328114    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:05.328280    2943 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:05.328473    2943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I0813 20:48:05.328489    2943 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 20:48:05.478392    2943 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887685.362041730
	
	I0813 20:48:05.478422    2943 fix.go:212] guest clock: 1628887685.362041730
	I0813 20:48:05.478434    2943 fix.go:225] Guest: 2021-08-13 20:48:05.36204173 +0000 UTC Remote: 2021-08-13 20:48:05.320671878 +0000 UTC m=+20.345442521 (delta=41.369852ms)
	I0813 20:48:05.478462    2943 fix.go:196] guest clock delta is within tolerance: 41.369852ms
	I0813 20:48:05.478471    2943 fix.go:57] fixHost completed within 20.296989242s
	I0813 20:48:05.478479    2943 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813204600-30853", held for 20.297014918s
	I0813 20:48:05.478529    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:05.478834    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetIP
	I0813 20:48:05.485176    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.485672    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:05.485710    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.486012    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:05.486162    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:05.486828    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .DriverName
	I0813 20:48:05.487131    2943 ssh_runner.go:149] Run: systemctl --version
	I0813 20:48:05.487190    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:05.487142    2943 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:48:05.487299    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:48:05.494603    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.495107    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.495558    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:05.495660    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.495824    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:05.495918    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:05.495968    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:05.496022    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:05.496158    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHPort
	I0813 20:48:05.496191    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:05.496353    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:48:05.496581    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:48:05.496624    2943 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa Username:docker}
	I0813 20:48:05.496716    2943 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204600-30853/id_rsa Username:docker}
	I0813 20:48:05.607678    2943 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:48:05.607815    2943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:09.669323    2943 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.061482185s)
	I0813 20:48:09.669540    2943 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0", assuming images are not preloaded.
	I0813 20:48:09.669608    2943 ssh_runner.go:149] Run: which lz4
	I0813 20:48:09.675552    2943 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:48:09.681436    2943 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:48:09.681471    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 20:48:11.878632    2943 crio.go:362] Took 2.203113 seconds to copy over tarball
	I0813 20:48:11.878724    2943 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:48:19.279802    2943 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.401047718s)
	I0813 20:48:19.279838    2943 crio.go:369] Took 7.401169 seconds to extract the tarball
	I0813 20:48:19.279850    2943 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 20:48:19.322406    2943 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:48:19.335980    2943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:48:19.347211    2943 docker.go:153] disabling docker service ...
	I0813 20:48:19.347279    2943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:48:19.358405    2943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:48:19.368787    2943 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:48:19.510595    2943 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:48:19.667996    2943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:48:19.680840    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:48:19.698528    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:48:19.709270    2943 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:48:19.718107    2943 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:48:19.718170    2943 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:48:19.735845    2943 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:48:19.744150    2943 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:48:19.889975    2943 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:48:20.218598    2943 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:48:20.218689    2943 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:48:20.228894    2943 start.go:413] Will wait 60s for crictl version
	I0813 20:48:20.228968    2943 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:48:20.268666    2943 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:48:20.268770    2943 ssh_runner.go:149] Run: crio --version
	I0813 20:48:20.428423    2943 ssh_runner.go:149] Run: crio --version
	I0813 20:48:21.334503    2943 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 20:48:21.334561    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) Calling .GetIP
	I0813 20:48:21.340911    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:21.341272    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:21", ip: ""} in network mk-kubernetes-upgrade-20210813204600-30853: {Iface:virbr2 ExpiryTime:2021-08-13 21:47:58 +0000 UTC Type:0 Mac:52:54:00:b4:0f:21 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:kubernetes-upgrade-20210813204600-30853 Clientid:01:52:54:00:b4:0f:21}
	I0813 20:48:21.341307    2943 main.go:130] libmachine: (kubernetes-upgrade-20210813204600-30853) DBG | domain kubernetes-upgrade-20210813204600-30853 has defined IP address 192.168.50.24 and MAC address 52:54:00:b4:0f:21 in network mk-kubernetes-upgrade-20210813204600-30853
	I0813 20:48:21.341540    2943 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0813 20:48:21.347143    2943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:48:21.376593    2943 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:48:21.376756    2943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:21.439029    2943 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:48:21.439057    2943 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:48:21.439112    2943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:21.477648    2943 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:48:21.477679    2943 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:48:21.477755    2943 ssh_runner.go:149] Run: crio config
	I0813 20:48:21.600165    2943 cni.go:93] Creating CNI manager for ""
	I0813 20:48:21.600194    2943 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:48:21.600208    2943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:48:21.600227    2943 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.24 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210813204600-30853 NodeName:kubernetes-upgrade-20210813204600-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:48:21.600432    2943 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-20210813204600-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:48:21.600543    2943 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-20210813204600-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.24 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:48:21.600611    2943 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:48:21.608738    2943 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:48:21.608816    2943 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:48:21.616796    2943 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0813 20:48:21.632276    2943 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:48:21.648058    2943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0813 20:48:21.665276    2943 ssh_runner.go:149] Run: grep 192.168.50.24	control-plane.minikube.internal$ /etc/hosts
	I0813 20:48:21.670015    2943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:48:21.680430    2943 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853 for IP: 192.168.50.24
	I0813 20:48:21.680492    2943 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:48:21.680513    2943 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:48:21.680598    2943 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/client.key
	I0813 20:48:21.680624    2943 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/apiserver.key.a477c9a8
	I0813 20:48:21.680647    2943 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/proxy-client.key
	I0813 20:48:21.680773    2943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:48:21.680824    2943 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:48:21.680839    2943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:48:21.680877    2943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:48:21.680906    2943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:48:21.680937    2943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:48:21.680992    2943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:48:21.682252    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:48:21.703616    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:48:21.723140    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:48:21.742000    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:48:21.762965    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:48:21.782025    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:48:21.802254    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:48:21.819442    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:48:21.838163    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:48:21.857875    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:48:21.877689    2943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:48:21.896001    2943 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:48:21.912106    2943 ssh_runner.go:149] Run: openssl version
	I0813 20:48:21.917889    2943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:48:21.925956    2943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:21.930872    2943 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:21.930921    2943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:21.937509    2943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:48:21.945273    2943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:48:21.953383    2943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:48:21.958308    2943 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:48:21.958342    2943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:48:21.964096    2943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:48:21.973058    2943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:48:21.983426    2943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:48:21.988323    2943 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:48:21.988386    2943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:48:21.994489    2943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:48:22.003239    2943 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:22.003339    2943 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:48:22.003399    2943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:48:22.041669    2943 cri.go:76] found id: ""
	I0813 20:48:22.041743    2943 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:48:22.050408    2943 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:48:22.050443    2943 kubeadm.go:600] restartCluster start
	I0813 20:48:22.050497    2943 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:48:22.059778    2943 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:22.060559    2943 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210813204600-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:22.060844    2943 kubeconfig.go:128] "kubernetes-upgrade-20210813204600-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:48:22.061396    2943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:48:22.062122    2943 kapi.go:59] client config for kubernetes-upgrade-20210813204600-30853: &rest.Config{Host:"https://192.168.50.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204600-30853/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:48:22.063944    2943 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:48:22.072852    2943 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.50.24
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.50.24
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.50.24"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210813204600-30853
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.24:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.22.0-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0813 20:48:22.072872    2943 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:48:22.072885    2943 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:48:22.072918    2943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:48:22.108110    2943 cri.go:76] found id: ""
	I0813 20:48:22.108178    2943 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:48:22.127856    2943 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:48:22.138243    2943 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:48:22.138305    2943 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:48:22.150356    2943 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:48:22.150410    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:22.348982    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:22.981761    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:23.293998    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:23.434206    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:23.548732    2943 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:48:23.548821    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:24.063637    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:24.564201    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:25.064068    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:25.563995    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:26.063754    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:26.564079    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:27.063554    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:27.563856    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:28.063220    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:28.563831    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:29.064226    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:29.563463    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:30.063805    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:30.563207    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:31.064254    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:31.563314    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:32.063790    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:32.563445    2943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:32.576764    2943 api_server.go:70] duration metric: took 9.028033752s to wait for apiserver process to appear ...
	I0813 20:48:32.576792    2943 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:48:32.576804    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:37.578748    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:48:38.079493    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:43.080158    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:48:43.579815    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:48.580683    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:48:49.079227    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:52.675008    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40728->192.168.50.24:8443: read: connection reset by peer
	I0813 20:48:53.079567    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:53.080305    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:48:53.579926    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:53.580867    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:48:54.078940    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:48:59.080218    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:48:59.578904    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:04.579804    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:05.079802    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:10.080830    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:10.579566    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.540519    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40792->192.168.50.24:8443: read: connection reset by peer
	I0813 20:49:14.579739    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.580451    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.079298    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.079947    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.579678    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.580450    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	[... 31 further identical healthz poll attempts (20:49:16 through 20:49:31, one every ~500ms) elided; each one logged the same pair of lines, with Get "https://192.168.50.24:8443/healthz" failing as dial tcp 192.168.50.24:8443: connect: connection refused ...]
	I0813 20:49:31.579867    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:36.580663    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
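
The run above is minikube's readiness probe stuck in a tight loop: roughly every 500ms it issues an HTTPS GET against the apiserver's /healthz endpoint and gets an immediate TCP "connection refused" back, meaning nothing is listening on 192.168.50.24:8443 yet; the final attempt instead hangs until the client timeout fires. The sketch below reproduces that pattern in standalone Go. It is a hypothetical illustration assuming only the standard library, not minikube's actual api_server.go; the URL and intervals are taken from the log.

```go
// Poll /healthz every 500ms until it answers 200 OK or an overall deadline
// passes. A minimal sketch of the loop seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// Per-request timeout; when it fires you get the
		// "Client.Timeout exceeded while awaiting headers" error seen above.
		Timeout: 5 * time.Second,
		// The cluster's serving cert is not in the host trust store, so a
		// quick probe either skips verification (as here) or loads the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is up and healthy
			}
		}
		// "connect: connection refused" lands here: nothing is listening yet.
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.24:8443/healthz",
		500*time.Millisecond, 4*time.Minute))
}
```
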
	I0813 20:49:37.078932    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:49:37.079026    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:49:37.114047    2943 cri.go:76] found id: "0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:49:37.114069    2943 cri.go:76] found id: "4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea"
	I0813 20:49:37.114074    2943 cri.go:76] found id: ""
	I0813 20:49:37.114079    2943 logs.go:270] 2 containers: [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea]
	I0813 20:49:37.114122    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:37.118580    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:37.122878    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:49:37.122929    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:49:37.157362    2943 cri.go:76] found id: ""
	I0813 20:49:37.157383    2943 logs.go:270] 0 containers: []
	W0813 20:49:37.157390    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:49:37.157397    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:49:37.157450    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:49:37.194140    2943 cri.go:76] found id: ""
	I0813 20:49:37.194162    2943 logs.go:270] 0 containers: []
	W0813 20:49:37.194173    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:49:37.194182    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:49:37.194237    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:49:37.226366    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:49:37.226392    2943 cri.go:76] found id: ""
	I0813 20:49:37.226400    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:49:37.226447    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:37.230528    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:49:37.230585    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:49:37.265787    2943 cri.go:76] found id: ""
	I0813 20:49:37.265812    2943 logs.go:270] 0 containers: []
	W0813 20:49:37.265817    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:49:37.265824    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:49:37.265871    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:49:37.301953    2943 cri.go:76] found id: ""
	I0813 20:49:37.301978    2943 logs.go:270] 0 containers: []
	W0813 20:49:37.301988    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:49:37.301996    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:49:37.302058    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:49:37.337273    2943 cri.go:76] found id: ""
	I0813 20:49:37.337295    2943 logs.go:270] 0 containers: []
	W0813 20:49:37.337302    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:49:37.337310    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:49:37.337362    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:49:37.375327    2943 cri.go:76] found id: "113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082"
	I0813 20:49:37.375352    2943 cri.go:76] found id: ""
	I0813 20:49:37.375360    2943 logs.go:270] 1 containers: [113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082]
	I0813 20:49:37.375409    2943 ssh_runner.go:149] Run: which crictl
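
Each "listing CRI containers" / "found id" group above comes from shelling out to `sudo crictl ps -a --quiet --name=<component>`, which prints one full container ID per line; an empty result is what produces the `No container was found matching` warnings for etcd, coredns, kube-proxy, and the others. Below is a minimal standalone sketch of that enumeration, assuming only that `crictl` is on the PATH and sudo is available; the wrapper function and its name are hypothetical.

```go
// Enumerate containers for a component by shelling out to crictl, mirroring
// the "sudo crictl ps -a --quiet --name=..." commands in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	// --quiet prints only container IDs, one per line; -a includes exited ones.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // each non-empty line is a full container ID
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(component)
		fmt.Printf("%s: %d containers %v (err: %v)\n", component, len(ids), ids, err)
	}
}
```
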
	I0813 20:49:37.379585    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:49:37.379616    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:49:37.440906    2943 logs.go:123] Gathering logs for kube-controller-manager [113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082] ...
	I0813 20:49:37.440938    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082"
	I0813 20:49:37.497458    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:49:37.497485    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:49:37.539801    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:49:37.539851    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:49:37.755301    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:49:37.755343    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:49:37.768137    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:49:37.768164    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 20:49:53.028811    2943 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (15.260625413s)
	W0813 20:49:53.028857    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:49:53.028867    2943 logs.go:123] Gathering logs for kube-apiserver [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44] ...
	I0813 20:49:53.028877    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:49:53.088977    2943 logs.go:123] Gathering logs for kube-apiserver [4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea] ...
	I0813 20:49:53.089020    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea"
	W0813 20:49:53.130498    2943 logs.go:130] failed kube-apiserver [4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea" /bin/bash -c "sudo /bin/crictl logs --tail 400 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea": Process exited with status 1
	stdout:
	
	stderr:
	E0813 20:49:53.128982    3022 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea\": container with ID starting with 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea not found: ID does not exist" containerID="4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea"
	time="2021-08-13T20:49:53Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea\": container with ID starting with 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea not found: ID does not exist"
	 output: 
	** stderr ** 
	E0813 20:49:53.128982    3022 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea\": container with ID starting with 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea not found: ID does not exist" containerID="4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea"
	time="2021-08-13T20:49:53Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea\": container with ID starting with 4f9e965d30aefaf975628e45a8b7bc0ef3d4078734ff9380f442769874b4aeea not found: ID does not exist"
	
	** /stderr **
	I0813 20:49:53.130531    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:49:53.130546    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
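
The "Gathering logs for ..." steps each run one diagnostic command through `/bin/bash -c` and cap the output at 400 lines (`journalctl -n 400`, `crictl logs --tail 400`, `dmesg ... | tail -n 400`). Note also the failure just above: container 4f9e965d... was listed at 20:49:37 but had been pruned by the runtime before its logs were fetched at 20:49:53, so the fetch must tolerate NotFound. Here is a rough standalone sketch of that fan-out, with the commands copied from the log and the surrounding helper assumed:

```go
// Run each diagnostic command via bash -c and report how much output came
// back, tolerating individual failures (e.g. a container pruned between
// listing and log fetch). A sketch of the gather loop, not minikube's logs.go.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the log lines above; the map is an assumption.
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// Keep going: one missing source must not abort the whole gather.
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s: gathered %d bytes ===\n", name, len(out))
	}
}
```
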
	I0813 20:49:55.695886    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:55.696638    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:56.078940    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:49:56.079038    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:49:56.115197    2943 cri.go:76] found id: "0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:49:56.115225    2943 cri.go:76] found id: ""
	I0813 20:49:56.115234    2943 logs.go:270] 1 containers: [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44]
	I0813 20:49:56.115302    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:56.119793    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:49:56.119862    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:49:56.156948    2943 cri.go:76] found id: ""
	I0813 20:49:56.156977    2943 logs.go:270] 0 containers: []
	W0813 20:49:56.156985    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:49:56.156993    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:49:56.157057    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:49:56.192914    2943 cri.go:76] found id: ""
	I0813 20:49:56.192942    2943 logs.go:270] 0 containers: []
	W0813 20:49:56.192951    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:49:56.192959    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:49:56.193026    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:49:56.231630    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:49:56.231660    2943 cri.go:76] found id: ""
	I0813 20:49:56.231673    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:49:56.231743    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:56.236420    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:49:56.236506    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:49:56.271518    2943 cri.go:76] found id: ""
	I0813 20:49:56.271547    2943 logs.go:270] 0 containers: []
	W0813 20:49:56.271558    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:49:56.271566    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:49:56.271631    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:49:56.313544    2943 cri.go:76] found id: ""
	I0813 20:49:56.313577    2943 logs.go:270] 0 containers: []
	W0813 20:49:56.313589    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:49:56.313601    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:49:56.313683    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:49:56.350389    2943 cri.go:76] found id: ""
	I0813 20:49:56.350422    2943 logs.go:270] 0 containers: []
	W0813 20:49:56.350431    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:49:56.350440    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:49:56.350505    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:49:56.390989    2943 cri.go:76] found id: "2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:49:56.391022    2943 cri.go:76] found id: "113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082"
	I0813 20:49:56.391030    2943 cri.go:76] found id: ""
	I0813 20:49:56.391037    2943 logs.go:270] 2 containers: [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996 113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082]
	I0813 20:49:56.391098    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:56.396432    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:49:56.402203    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:49:56.402228    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:49:56.415189    2943 logs.go:123] Gathering logs for kube-apiserver [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44] ...
	I0813 20:49:56.415227    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:49:56.459436    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:49:56.459466    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:49:56.526704    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:49:56.526747    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:49:56.577884    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:49:56.577928    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:49:56.678237    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:49:56.678278    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:49:56.775774    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:49:56.775806    2943 logs.go:123] Gathering logs for kube-controller-manager [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996] ...
	I0813 20:49:56.775823    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:49:56.828599    2943 logs.go:123] Gathering logs for kube-controller-manager [113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082] ...
	I0813 20:49:56.828638    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 113f1181586e21c0ad2243440a35a52929130b4cd7253f256a8102a469817082"
	I0813 20:49:56.892992    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:49:56.893037    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	[... one further poll-and-gather cycle (starting 20:49:59.697) elided: the healthz check was again refused, crictl found the same containers (kube-apiserver 0d4fa2599991, kube-scheduler ba3ae86bd837, kube-controller-manager 2eb56428ce21 and 113f1181586e), kubelet/dmesg/CRI-O/container-status logs were gathered, and "kubectl describe nodes" failed again with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I0813 20:50:03.903314    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:03.903922    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:04.079258    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:04.079358    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:04.132264    2943 cri.go:76] found id: "0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:50:04.132299    2943 cri.go:76] found id: ""
	I0813 20:50:04.132309    2943 logs.go:270] 1 containers: [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44]
	I0813 20:50:04.132370    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:04.138277    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:04.138352    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:04.178892    2943 cri.go:76] found id: ""
	I0813 20:50:04.178917    2943 logs.go:270] 0 containers: []
	W0813 20:50:04.178925    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:04.178933    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:04.178993    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:04.238772    2943 cri.go:76] found id: ""
	I0813 20:50:04.238798    2943 logs.go:270] 0 containers: []
	W0813 20:50:04.238806    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:04.238814    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:04.238893    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:04.292869    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:04.292896    2943 cri.go:76] found id: ""
	I0813 20:50:04.292905    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:04.292957    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:04.299700    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:04.299762    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:04.346602    2943 cri.go:76] found id: ""
	I0813 20:50:04.346625    2943 logs.go:270] 0 containers: []
	W0813 20:50:04.346633    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:04.346641    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:04.346695    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:04.396251    2943 cri.go:76] found id: ""
	I0813 20:50:04.396279    2943 logs.go:270] 0 containers: []
	W0813 20:50:04.396287    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:04.396296    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:04.396356    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:04.439375    2943 cri.go:76] found id: ""
	I0813 20:50:04.439403    2943 logs.go:270] 0 containers: []
	W0813 20:50:04.439411    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:04.439419    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:04.439479    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:04.479487    2943 cri.go:76] found id: "2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:04.479515    2943 cri.go:76] found id: ""
	I0813 20:50:04.479523    2943 logs.go:270] 1 containers: [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996]
	I0813 20:50:04.479584    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:04.485325    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:04.485354    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:04.539676    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:04.539708    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:04.621418    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:04.621456    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:04.635527    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:04.635562    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:04.716260    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:04.716286    2943 logs.go:123] Gathering logs for kube-apiserver [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44] ...
	I0813 20:50:04.716304    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:50:04.767707    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:04.767741    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:04.838353    2943 logs.go:123] Gathering logs for kube-controller-manager [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996] ...
	I0813 20:50:04.838381    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:04.909302    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:04.909334    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	[... two further poll-and-gather cycles (starting 20:50:07.713 and 20:50:11.594) elided: each healthz check failed with the same "connection refused", crictl again found only kube-apiserver 0d4fa2599991, kube-scheduler ba3ae86bd837, and kube-controller-manager 2eb56428ce21, the same kubelet/dmesg/CRI-O/container-status logs were gathered, and "kubectl describe nodes" failed twice more with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I0813 20:50:15.507526    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:20.508303    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
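
This is the log's second failure mode: unlike the instant "connection refused" earlier (nothing bound to the port), here the connection attempt produces no HTTP response within the client timeout, which typically means the apiserver is coming up but not yet serving; a moment later a new kube-apiserver container (b92e70a9...) indeed appears. Below is a sketch of telling the two modes apart in Go; the classification helper is an assumption, not minikube's code, and the syscall comparison is Unix-specific.

```go
// Distinguish "connect: connection refused" (nothing listening) from a client
// timeout (listener present but unresponsive) when probing /healthz.
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"syscall"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	_, err := client.Get("https://192.168.50.24:8443/healthz")
	if err == nil {
		fmt.Println("healthy")
		return
	}
	var netErr net.Error
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		// The dominant case in this log: the apiserver process is down.
		fmt.Println("refused: apiserver not listening")
	case errors.As(err, &netErr) && netErr.Timeout():
		// The "Client.Timeout exceeded while awaiting headers" case: the
		// process may be starting but is not serving responses yet.
		fmt.Println("timeout: apiserver unresponsive")
	default:
		fmt.Println("other error:", err)
	}
	os.Exit(1)
}
```
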
	I0813 20:50:20.579552    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:20.579655    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:20.625255    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:20.625290    2943 cri.go:76] found id: "0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:50:20.625297    2943 cri.go:76] found id: ""
	I0813 20:50:20.625305    2943 logs.go:270] 2 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e 0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44]
	I0813 20:50:20.625367    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:20.633999    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:20.639415    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:20.639480    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:20.680913    2943 cri.go:76] found id: ""
	I0813 20:50:20.680960    2943 logs.go:270] 0 containers: []
	W0813 20:50:20.680969    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:20.680978    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:20.681047    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:20.721704    2943 cri.go:76] found id: ""
	I0813 20:50:20.721729    2943 logs.go:270] 0 containers: []
	W0813 20:50:20.721739    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:20.721747    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:20.721808    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:20.760890    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:20.760918    2943 cri.go:76] found id: ""
	I0813 20:50:20.760928    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:20.760988    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:20.766001    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:20.766073    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:20.802598    2943 cri.go:76] found id: ""
	I0813 20:50:20.802625    2943 logs.go:270] 0 containers: []
	W0813 20:50:20.802633    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:20.802641    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:20.802703    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:20.844521    2943 cri.go:76] found id: ""
	I0813 20:50:20.844554    2943 logs.go:270] 0 containers: []
	W0813 20:50:20.844564    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:20.844573    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:20.844644    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:20.896990    2943 cri.go:76] found id: ""
	I0813 20:50:20.897021    2943 logs.go:270] 0 containers: []
	W0813 20:50:20.897029    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:20.897038    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:20.897098    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:20.939783    2943 cri.go:76] found id: "2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:20.939812    2943 cri.go:76] found id: ""
	I0813 20:50:20.939820    2943 logs.go:270] 1 containers: [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996]
	I0813 20:50:20.939879    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:20.945766    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:20.945797    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:21.021790    2943 logs.go:123] Gathering logs for kube-apiserver [0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44] ...
	I0813 20:50:21.021830    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d4fa2599991699ab6728c99906704b2b2c2ca6195e355b4e058ba10621e1f44"
	I0813 20:50:21.064748    2943 logs.go:123] Gathering logs for kube-controller-manager [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996] ...
	I0813 20:50:21.064788    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:21.137219    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:21.137258    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:21.438300    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:21.438343    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:21.490297    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:21.490337    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:21.502670    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:21.502701    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 20:50:35.945546    2943 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.442813466s)
	W0813 20:50:35.945600    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:35.945620    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:35.945633    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:36.007290    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:36.007331    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:38.583093    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:38.583844    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:39.079392    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:39.079479    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:39.120891    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:39.120923    2943 cri.go:76] found id: ""
	I0813 20:50:39.120931    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:50:39.121000    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:39.125685    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:39.125746    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:39.161603    2943 cri.go:76] found id: ""
	I0813 20:50:39.161631    2943 logs.go:270] 0 containers: []
	W0813 20:50:39.161639    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:39.161646    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:39.161704    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:39.205447    2943 cri.go:76] found id: ""
	I0813 20:50:39.205480    2943 logs.go:270] 0 containers: []
	W0813 20:50:39.205497    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:39.205507    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:39.205588    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:39.255982    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:39.256031    2943 cri.go:76] found id: ""
	I0813 20:50:39.256041    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:39.256098    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:39.261719    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:39.261786    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:39.311098    2943 cri.go:76] found id: ""
	I0813 20:50:39.311128    2943 logs.go:270] 0 containers: []
	W0813 20:50:39.311136    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:39.311149    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:39.311210    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:39.354367    2943 cri.go:76] found id: ""
	I0813 20:50:39.354395    2943 logs.go:270] 0 containers: []
	W0813 20:50:39.354403    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:39.354412    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:39.354471    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:39.397337    2943 cri.go:76] found id: ""
	I0813 20:50:39.397371    2943 logs.go:270] 0 containers: []
	W0813 20:50:39.397380    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:39.397389    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:39.397449    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:39.436449    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:39.436479    2943 cri.go:76] found id: "2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:39.436486    2943 cri.go:76] found id: ""
	I0813 20:50:39.436494    2943 logs.go:270] 2 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996]
	I0813 20:50:39.436557    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:39.441752    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:39.446354    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:39.446375    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:39.520470    2943 logs.go:123] Gathering logs for kube-controller-manager [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996] ...
	I0813 20:50:39.520515    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:39.589504    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:39.589546    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:39.641398    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:39.641435    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:39.921618    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:39.921656    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:40.026387    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:40.026432    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:40.042401    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:40.042441    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:40.128896    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:40.128924    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:40.128938    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:40.178997    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:50:40.179037    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:42.731291    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:42.732093    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:43.079664    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:43.079786    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:43.117707    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:43.117738    2943 cri.go:76] found id: ""
	I0813 20:50:43.117745    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:50:43.117805    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:43.122941    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:43.123019    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:43.163327    2943 cri.go:76] found id: ""
	I0813 20:50:43.163359    2943 logs.go:270] 0 containers: []
	W0813 20:50:43.163369    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:43.163379    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:43.163444    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:43.211051    2943 cri.go:76] found id: ""
	I0813 20:50:43.211080    2943 logs.go:270] 0 containers: []
	W0813 20:50:43.211090    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:43.211105    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:43.211173    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:43.250916    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:43.250942    2943 cri.go:76] found id: ""
	I0813 20:50:43.250949    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:43.251021    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:43.256511    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:43.256591    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:43.304800    2943 cri.go:76] found id: ""
	I0813 20:50:43.304830    2943 logs.go:270] 0 containers: []
	W0813 20:50:43.304840    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:43.304849    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:43.304916    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:43.343741    2943 cri.go:76] found id: ""
	I0813 20:50:43.343775    2943 logs.go:270] 0 containers: []
	W0813 20:50:43.343784    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:43.343794    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:43.343852    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:43.386205    2943 cri.go:76] found id: ""
	I0813 20:50:43.386230    2943 logs.go:270] 0 containers: []
	W0813 20:50:43.386254    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:43.386263    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:43.386326    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:43.437384    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:43.437414    2943 cri.go:76] found id: "2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:43.437421    2943 cri.go:76] found id: ""
	I0813 20:50:43.437428    2943 logs.go:270] 2 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996]
	I0813 20:50:43.437484    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:43.443061    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:43.447947    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:43.447971    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:43.768081    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:43.768137    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:43.860293    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:43.860338    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:43.961400    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:43.961431    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:43.961447    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:44.015133    2943 logs.go:123] Gathering logs for kube-controller-manager [2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996] ...
	I0813 20:50:44.015169    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2eb56428ce21057a331f473fecd0810d846a91b9cc8903850e130473342b9996"
	I0813 20:50:44.086790    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:44.086838    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:44.101925    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:44.101964    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:44.177880    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:50:44.177920    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:44.225701    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:44.225750    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:46.781792    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:46.782544    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:47.078943    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:47.079029    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:47.116800    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:47.116825    2943 cri.go:76] found id: ""
	I0813 20:50:47.116832    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:50:47.116878    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:47.122865    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:47.122926    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:47.156997    2943 cri.go:76] found id: ""
	I0813 20:50:47.157027    2943 logs.go:270] 0 containers: []
	W0813 20:50:47.157035    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:47.157044    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:47.157109    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:47.204069    2943 cri.go:76] found id: ""
	I0813 20:50:47.204103    2943 logs.go:270] 0 containers: []
	W0813 20:50:47.204110    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:47.204117    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:47.204178    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:47.239462    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:47.239485    2943 cri.go:76] found id: ""
	I0813 20:50:47.239492    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:47.239542    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:47.243883    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:47.243941    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:47.276581    2943 cri.go:76] found id: ""
	I0813 20:50:47.276608    2943 logs.go:270] 0 containers: []
	W0813 20:50:47.276617    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:47.276625    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:47.276690    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:47.315866    2943 cri.go:76] found id: ""
	I0813 20:50:47.315895    2943 logs.go:270] 0 containers: []
	W0813 20:50:47.315904    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:47.315913    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:47.315975    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:47.349904    2943 cri.go:76] found id: ""
	I0813 20:50:47.349927    2943 logs.go:270] 0 containers: []
	W0813 20:50:47.349933    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:47.349940    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:47.350005    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:47.386085    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:47.386119    2943 cri.go:76] found id: ""
	I0813 20:50:47.386128    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:50:47.386194    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:47.391369    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:47.391394    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:47.461377    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:50:47.461417    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:47.540408    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:47.540452    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:47.861198    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:47.861245    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:47.922646    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:47.922686    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:47.999743    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:47.999783    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:48.012734    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:48.012762    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:48.111170    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:48.111203    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:48.111221    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:50.664094    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:50.664961    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:51.079482    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:51.079559    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:51.123937    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:51.123969    2943 cri.go:76] found id: ""
	I0813 20:50:51.123977    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:50:51.124036    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:51.129419    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:51.129480    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:51.177462    2943 cri.go:76] found id: ""
	I0813 20:50:51.177486    2943 logs.go:270] 0 containers: []
	W0813 20:50:51.177493    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:51.177501    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:51.177554    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:51.216118    2943 cri.go:76] found id: ""
	I0813 20:50:51.216147    2943 logs.go:270] 0 containers: []
	W0813 20:50:51.216155    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:51.216162    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:51.216216    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:51.260361    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:51.260384    2943 cri.go:76] found id: ""
	I0813 20:50:51.260391    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:51.260444    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:51.266777    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:51.266843    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:51.309447    2943 cri.go:76] found id: ""
	I0813 20:50:51.309481    2943 logs.go:270] 0 containers: []
	W0813 20:50:51.309490    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:51.309498    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:51.309566    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:51.348871    2943 cri.go:76] found id: ""
	I0813 20:50:51.348904    2943 logs.go:270] 0 containers: []
	W0813 20:50:51.348915    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:51.348925    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:51.349017    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:51.388415    2943 cri.go:76] found id: ""
	I0813 20:50:51.388448    2943 logs.go:270] 0 containers: []
	W0813 20:50:51.388457    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:51.388466    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:51.388532    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:51.425110    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:51.425140    2943 cri.go:76] found id: ""
	I0813 20:50:51.425149    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:50:51.425209    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:51.430448    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:51.430473    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:51.442676    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:51.442711    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:51.514660    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:51.514687    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:51.514705    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:51.562504    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:51.562536    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:51.639309    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:50:51.639348    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:51.702042    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:51.702077    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:51.988594    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:51.988685    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:52.036434    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:52.036469    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:54.614970    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:54.615580    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:55.079542    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:55.079614    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:55.118163    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:55.118194    2943 cri.go:76] found id: ""
	I0813 20:50:55.118202    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:50:55.118255    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:55.124464    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:55.124527    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:55.161816    2943 cri.go:76] found id: ""
	I0813 20:50:55.161844    2943 logs.go:270] 0 containers: []
	W0813 20:50:55.161853    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:55.161862    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:55.161922    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:55.209066    2943 cri.go:76] found id: ""
	I0813 20:50:55.209094    2943 logs.go:270] 0 containers: []
	W0813 20:50:55.209101    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:55.209110    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:55.209172    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:55.254478    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:55.254501    2943 cri.go:76] found id: ""
	I0813 20:50:55.254509    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:55.254563    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:55.259625    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:55.259686    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:55.306918    2943 cri.go:76] found id: ""
	I0813 20:50:55.306944    2943 logs.go:270] 0 containers: []
	W0813 20:50:55.306952    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:55.306960    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:55.307018    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:55.356998    2943 cri.go:76] found id: ""
	I0813 20:50:55.357017    2943 logs.go:270] 0 containers: []
	W0813 20:50:55.357023    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:55.357029    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:55.357083    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:55.396180    2943 cri.go:76] found id: ""
	I0813 20:50:55.396205    2943 logs.go:270] 0 containers: []
	W0813 20:50:55.396212    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:55.396219    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:55.396276    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:55.434388    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:55.434412    2943 cri.go:76] found id: ""
	I0813 20:50:55.434420    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:50:55.434479    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:55.438820    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:55.438843    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:55.477872    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:55.477901    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:55.542682    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:50:55.542720    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:55.593477    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:55.593519    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:55.871961    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:55.871999    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:55.906794    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:55.906829    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:55.976099    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:55.976138    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:55.988580    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:55.988605    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:56.058500    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:50:58.559594    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:50:58.560345    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:50:58.579509    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:50:58.579591    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:50:58.619834    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:58.619857    2943 cri.go:76] found id: ""
	I0813 20:50:58.619864    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:50:58.619916    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:58.624661    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:50:58.624722    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:50:58.664899    2943 cri.go:76] found id: ""
	I0813 20:50:58.664922    2943 logs.go:270] 0 containers: []
	W0813 20:50:58.664930    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:50:58.664938    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:50:58.664995    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:50:58.703454    2943 cri.go:76] found id: ""
	I0813 20:50:58.703480    2943 logs.go:270] 0 containers: []
	W0813 20:50:58.703487    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:50:58.703494    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:50:58.703549    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:50:58.744848    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:58.744871    2943 cri.go:76] found id: ""
	I0813 20:50:58.744878    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:50:58.744932    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:58.749345    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:50:58.749399    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:50:58.786526    2943 cri.go:76] found id: ""
	I0813 20:50:58.786550    2943 logs.go:270] 0 containers: []
	W0813 20:50:58.786555    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:50:58.786563    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:50:58.786618    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:50:58.825164    2943 cri.go:76] found id: ""
	I0813 20:50:58.825188    2943 logs.go:270] 0 containers: []
	W0813 20:50:58.825195    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:50:58.825202    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:50:58.825259    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:50:58.863952    2943 cri.go:76] found id: ""
	I0813 20:50:58.863983    2943 logs.go:270] 0 containers: []
	W0813 20:50:58.863989    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:50:58.863996    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:50:58.864056    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:50:58.896607    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:58.896632    2943 cri.go:76] found id: ""
	I0813 20:50:58.896639    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:50:58.896686    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:50:58.900887    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:50:58.900929    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:50:58.937206    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:50:58.937245    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:50:58.990768    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:50:58.990805    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:50:59.035935    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:50:59.035964    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:50:59.294812    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:50:59.294870    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:50:59.333201    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:50:59.333235    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:50:59.400898    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:50:59.400937    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:50:59.412852    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:50:59.412891    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:50:59.481013    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:01.981354    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:01.982147    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:02.079574    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:02.079669    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:02.121620    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:02.121651    2943 cri.go:76] found id: ""
	I0813 20:51:02.121659    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:02.121713    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:02.126354    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:02.126416    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:02.164011    2943 cri.go:76] found id: ""
	I0813 20:51:02.164053    2943 logs.go:270] 0 containers: []
	W0813 20:51:02.164062    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:02.164070    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:02.164132    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:02.199929    2943 cri.go:76] found id: ""
	I0813 20:51:02.199956    2943 logs.go:270] 0 containers: []
	W0813 20:51:02.199962    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:02.199972    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:02.200036    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:02.236748    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:02.236780    2943 cri.go:76] found id: ""
	I0813 20:51:02.236788    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:02.236856    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:02.241433    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:02.241517    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:02.282148    2943 cri.go:76] found id: ""
	I0813 20:51:02.282183    2943 logs.go:270] 0 containers: []
	W0813 20:51:02.282193    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:02.282202    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:02.282275    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:02.319318    2943 cri.go:76] found id: ""
	I0813 20:51:02.319347    2943 logs.go:270] 0 containers: []
	W0813 20:51:02.319356    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:02.319364    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:02.319435    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:02.350930    2943 cri.go:76] found id: ""
	I0813 20:51:02.350959    2943 logs.go:270] 0 containers: []
	W0813 20:51:02.350967    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:02.350975    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:02.351034    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:02.384511    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:02.384541    2943 cri.go:76] found id: ""
	I0813 20:51:02.384548    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:02.384602    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:02.389237    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:02.389257    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:02.451215    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:02.451259    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:02.500711    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:02.500742    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:02.755844    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:02.755886    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:02.804130    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:02.804167    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:02.868620    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:02.868658    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:02.881870    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:02.881898    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:02.961301    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:02.961334    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:02.961350    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:05.502578    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:05.503247    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:05.579479    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:05.579574    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:05.623713    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:05.623747    2943 cri.go:76] found id: ""
	I0813 20:51:05.623755    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:05.623814    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:05.629390    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:05.629457    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:05.676044    2943 cri.go:76] found id: ""
	I0813 20:51:05.676074    2943 logs.go:270] 0 containers: []
	W0813 20:51:05.676081    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:05.676088    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:05.676155    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:05.725901    2943 cri.go:76] found id: ""
	I0813 20:51:05.725931    2943 logs.go:270] 0 containers: []
	W0813 20:51:05.725940    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:05.725948    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:05.726010    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:05.767858    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:05.767936    2943 cri.go:76] found id: ""
	I0813 20:51:05.767951    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:05.768016    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:05.773610    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:05.773685    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:05.823813    2943 cri.go:76] found id: ""
	I0813 20:51:05.823846    2943 logs.go:270] 0 containers: []
	W0813 20:51:05.823856    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:05.823867    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:05.823936    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:05.866326    2943 cri.go:76] found id: ""
	I0813 20:51:05.866356    2943 logs.go:270] 0 containers: []
	W0813 20:51:05.866365    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:05.866374    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:05.866443    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:05.924058    2943 cri.go:76] found id: ""
	I0813 20:51:05.924090    2943 logs.go:270] 0 containers: []
	W0813 20:51:05.924099    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:05.924108    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:05.924177    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:05.970027    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:05.970063    2943 cri.go:76] found id: ""
	I0813 20:51:05.970073    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:05.970153    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:05.975608    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:05.975641    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:06.034487    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:06.034529    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:06.321378    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:06.321424    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:06.364791    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:06.364830    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:06.439765    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:06.439800    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:06.450549    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:06.450576    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:06.519395    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:06.519423    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:06.519440    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:06.559709    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:06.559750    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:09.125508    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:09.126178    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
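The two lines above are minikube's apiserver health probe: an HTTPS GET against /healthz on the VM's IP, failing instantly at the TCP level with "connection refused" because nothing is listening on port 8443. A minimal sketch of that probe pattern (illustrative only, not minikube's actual api_server.go; the address comes from the log, everything else is assumed):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The bootstrap apiserver serves a self-signed certificate, so this
        // sketch skips verification; a real client would pin the cluster CA.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.24:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // here: "connect: connection refused"
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status) // a healthy apiserver answers 200 "ok"
    }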
	I0813 20:51:09.579749    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:09.579821    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:09.620165    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:09.620198    2943 cri.go:76] found id: ""
	I0813 20:51:09.620206    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:09.620285    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:09.624937    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:09.624993    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:09.663757    2943 cri.go:76] found id: ""
	I0813 20:51:09.663786    2943 logs.go:270] 0 containers: []
	W0813 20:51:09.663794    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:09.663802    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:09.663857    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:09.699491    2943 cri.go:76] found id: ""
	I0813 20:51:09.699521    2943 logs.go:270] 0 containers: []
	W0813 20:51:09.699529    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:09.699537    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:09.699596    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:09.733585    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:09.733626    2943 cri.go:76] found id: ""
	I0813 20:51:09.733647    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:09.733700    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:09.739121    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:09.739188    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:09.772762    2943 cri.go:76] found id: ""
	I0813 20:51:09.772791    2943 logs.go:270] 0 containers: []
	W0813 20:51:09.772799    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:09.772805    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:09.772858    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:09.808105    2943 cri.go:76] found id: ""
	I0813 20:51:09.808137    2943 logs.go:270] 0 containers: []
	W0813 20:51:09.808145    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:09.808153    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:09.808213    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:09.843468    2943 cri.go:76] found id: ""
	I0813 20:51:09.843496    2943 logs.go:270] 0 containers: []
	W0813 20:51:09.843503    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:09.843508    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:09.843559    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:09.876142    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:09.876173    2943 cri.go:76] found id: ""
	I0813 20:51:09.876181    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:09.876239    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:09.880918    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:09.880944    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:09.958356    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:09.958395    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:09.973509    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:09.973544    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:10.038678    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:10.038706    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:10.038726    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:10.083739    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:10.083776    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:10.151446    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:10.151489    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:10.214829    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:10.214872    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:10.498347    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:10.498383    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:13.044034    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:13.044625    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
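Each failed probe triggers the same container enumeration that follows these lines: for every control-plane component, crictl is run with --quiet to print bare container IDs in any state. The command string is verbatim from the log; the Go wrapper below is an assumed illustration, not minikube's cri.go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all CRI containers (running or exited) whose name
    // matches the given component; --quiet prints one hex ID per line.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }

Throughout the cycles above, only kube-apiserver, kube-scheduler, and kube-controller-manager ever return an ID; etcd, coredns, kube-proxy, kubernetes-dashboard, and storage-provisioner stay empty.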
	I0813 20:51:13.079779    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:13.079841    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:13.115556    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:13.115587    2943 cri.go:76] found id: ""
	I0813 20:51:13.115595    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:13.115652    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:13.120555    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:13.120637    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:13.154536    2943 cri.go:76] found id: ""
	I0813 20:51:13.154558    2943 logs.go:270] 0 containers: []
	W0813 20:51:13.154564    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:13.154571    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:13.154619    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:13.186532    2943 cri.go:76] found id: ""
	I0813 20:51:13.186564    2943 logs.go:270] 0 containers: []
	W0813 20:51:13.186573    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:13.186582    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:13.186641    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:13.220479    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:13.220506    2943 cri.go:76] found id: ""
	I0813 20:51:13.220515    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:13.220567    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:13.225041    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:13.225095    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:13.261414    2943 cri.go:76] found id: ""
	I0813 20:51:13.261442    2943 logs.go:270] 0 containers: []
	W0813 20:51:13.261450    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:13.261458    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:13.261521    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:13.296299    2943 cri.go:76] found id: ""
	I0813 20:51:13.296328    2943 logs.go:270] 0 containers: []
	W0813 20:51:13.296336    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:13.296344    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:13.296395    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:13.335669    2943 cri.go:76] found id: ""
	I0813 20:51:13.335695    2943 logs.go:270] 0 containers: []
	W0813 20:51:13.335704    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:13.335711    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:13.335770    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:13.379479    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:13.379513    2943 cri.go:76] found id: ""
	I0813 20:51:13.379524    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:13.379585    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:13.385373    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:13.385396    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:13.439726    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:13.439758    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:13.514780    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:13.514820    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:13.530823    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:13.530889    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:13.600992    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:13.601024    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:13.601043    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:13.643418    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:13.643450    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:13.728759    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:13.728814    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:13.782704    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:13.782754    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
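The "Gathering logs for X" / "Run:" pairs above show the collection half of the loop: every log source maps to one shell command executed on the guest. A rough local equivalent (assumed; minikube runs these through its SSH runner rather than os/exec, but the command strings are copied verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Label -> command, exactly as they appear in the log lines above.
        sources := []struct{ label, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", s.label, err)
                continue
            }
            fmt.Printf("%s: captured %d bytes\n", s.label, len(out))
        }
    }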
	I0813 20:51:16.573009    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:16.573749    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:16.579932    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:16.580019    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:16.653435    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:16.653466    2943 cri.go:76] found id: ""
	I0813 20:51:16.653474    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:16.653530    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:16.660465    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:16.660542    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:16.720436    2943 cri.go:76] found id: ""
	I0813 20:51:16.720465    2943 logs.go:270] 0 containers: []
	W0813 20:51:16.720474    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:16.720481    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:16.720543    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:16.783182    2943 cri.go:76] found id: ""
	I0813 20:51:16.783207    2943 logs.go:270] 0 containers: []
	W0813 20:51:16.783215    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:16.783223    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:16.783281    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:16.829664    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:16.829697    2943 cri.go:76] found id: ""
	I0813 20:51:16.829705    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:16.829765    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:16.837191    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:16.837279    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:16.892368    2943 cri.go:76] found id: ""
	I0813 20:51:16.892402    2943 logs.go:270] 0 containers: []
	W0813 20:51:16.892412    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:16.892422    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:16.892496    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:16.949430    2943 cri.go:76] found id: ""
	I0813 20:51:16.949462    2943 logs.go:270] 0 containers: []
	W0813 20:51:16.949471    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:16.949480    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:16.949546    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:17.009553    2943 cri.go:76] found id: ""
	I0813 20:51:17.009583    2943 logs.go:270] 0 containers: []
	W0813 20:51:17.009592    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:17.009602    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:17.009671    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:17.064675    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:17.064704    2943 cri.go:76] found id: ""
	I0813 20:51:17.064713    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:17.064773    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:17.071311    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:17.071342    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:17.157621    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:17.157661    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:17.224949    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:17.224990    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:17.583283    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:17.583326    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:17.653107    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:17.653150    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:17.737313    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:17.737356    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:17.750941    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:17.750975    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:17.837048    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
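kubectl's failure here is consistent with the healthz probes: the kubeconfig at /var/lib/minikube/kubeconfig points at localhost:8443 inside the VM, the probe uses the VM IP 192.168.50.24:8443, and both are refused because no apiserver process is accepting connections. "Connection refused" distinguishes "no listener on the port" from "a listener that hangs", which would surface as a timeout instead (and does, later in this log). A quick triage sketch, assumed and not part of the test itself:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"localhost:8443", "192.168.50.24:8443"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Println(addr, "->", err) // "connection refused": port has no listener
                continue
            }
            conn.Close()
            fmt.Println(addr, "-> listening")
        }
    }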
	I0813 20:51:17.837078    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:17.837094    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:20.389543    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:20.390278    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:20.579641    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:20.579743    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:20.622537    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:20.622565    2943 cri.go:76] found id: ""
	I0813 20:51:20.622573    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:20.622631    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:20.628589    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:20.628665    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:20.681091    2943 cri.go:76] found id: ""
	I0813 20:51:20.681123    2943 logs.go:270] 0 containers: []
	W0813 20:51:20.681131    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:20.681140    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:20.681213    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:20.728879    2943 cri.go:76] found id: ""
	I0813 20:51:20.728907    2943 logs.go:270] 0 containers: []
	W0813 20:51:20.728915    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:20.728923    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:20.728988    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:20.778461    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:20.778491    2943 cri.go:76] found id: ""
	I0813 20:51:20.778499    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:20.778570    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:20.786499    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:20.786576    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:20.836097    2943 cri.go:76] found id: ""
	I0813 20:51:20.836127    2943 logs.go:270] 0 containers: []
	W0813 20:51:20.836142    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:20.836151    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:20.836214    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:20.880826    2943 cri.go:76] found id: ""
	I0813 20:51:20.880858    2943 logs.go:270] 0 containers: []
	W0813 20:51:20.880867    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:20.880876    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:20.880944    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:20.932641    2943 cri.go:76] found id: ""
	I0813 20:51:20.932673    2943 logs.go:270] 0 containers: []
	W0813 20:51:20.932681    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:20.932690    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:20.932767    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:20.981702    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:20.981740    2943 cri.go:76] found id: ""
	I0813 20:51:20.981749    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:20.981808    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:20.988233    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:20.988260    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:21.063837    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:21.063884    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:21.400636    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:21.400687    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
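The container-status command above is a three-stage shell fallback: use the path resolved by `which crictl` if available, else try a bare crictl on PATH, and if that whole pipeline fails, fall back to docker. The backquotes in the log line are ordinary shell command substitution. A minimal sketch (assumed) of invoking it:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Verbatim fallback chain from the log: resolved crictl, bare crictl,
        // then docker, so the command still yields output on docker-only hosts.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(string(out))
    }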
	I0813 20:51:21.451231    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:21.451269    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:21.545982    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:21.546039    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:21.567657    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:21.567700    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:21.647384    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:21.647417    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:21.647432    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:21.709500    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:21.709543    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:24.290521    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:24.291260    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:24.579775    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:24.579850    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:24.620704    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:24.620740    2943 cri.go:76] found id: ""
	I0813 20:51:24.620748    2943 logs.go:270] 1 containers: [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:24.620813    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:24.627387    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:24.627464    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:24.680379    2943 cri.go:76] found id: ""
	I0813 20:51:24.680405    2943 logs.go:270] 0 containers: []
	W0813 20:51:24.680414    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:24.680422    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:24.680485    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:24.729169    2943 cri.go:76] found id: ""
	I0813 20:51:24.729208    2943 logs.go:270] 0 containers: []
	W0813 20:51:24.729218    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:24.729227    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:24.729296    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:24.787291    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:24.787322    2943 cri.go:76] found id: ""
	I0813 20:51:24.787330    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:24.787418    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:24.795310    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:24.795387    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:24.839393    2943 cri.go:76] found id: ""
	I0813 20:51:24.839439    2943 logs.go:270] 0 containers: []
	W0813 20:51:24.839448    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:24.839457    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:24.839519    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:24.881275    2943 cri.go:76] found id: ""
	I0813 20:51:24.881311    2943 logs.go:270] 0 containers: []
	W0813 20:51:24.881321    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:24.881331    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:24.881398    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:24.921727    2943 cri.go:76] found id: ""
	I0813 20:51:24.921759    2943 logs.go:270] 0 containers: []
	W0813 20:51:24.921768    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:24.921777    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:24.921840    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:24.960351    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:24.960385    2943 cri.go:76] found id: ""
	I0813 20:51:24.960394    2943 logs.go:270] 1 containers: [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:24.960454    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:24.970428    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:24.970455    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:25.036364    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:27.749032    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:27.749067    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:27.827700    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:27.827742    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:27.908608    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:27.908652    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:27.991464    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:27.991516    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:28.390578    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:28.390631    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:28.479789    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:28.479829    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:28.565282    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:28.565317    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:31.080001    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:36.081124    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
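Note the failure-mode shift here: every earlier probe died instantly with "connection refused", while this one hangs for roughly five seconds (20:51:31 to 20:51:36) before failing with Go's client-timeout error, which suggests the port now accepts connections but the server is not yet answering. The enumeration that follows indeed finds a second, newer kube-apiserver container (8aadf14...) alongside the old one. The exact error text is what net/http produces when Client.Timeout expires while awaiting response headers, reproducible with a hanging test server (illustrative sketch; the five-second timeout is an assumption inferred from the gap in timestamps):

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "time"
    )

    func main() {
        // A server that accepts the TCP connection but never sends headers.
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(10 * time.Second)
        }))
        defer srv.Close()

        client := &http.Client{Timeout: 5 * time.Second}
        _, err := client.Get(srv.URL)
        // Prints: Get "...": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
        fmt.Println(err)
    }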
	I0813 20:51:36.579389    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:36.579476    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:36.624593    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:51:36.624625    2943 cri.go:76] found id: "b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:36.624632    2943 cri.go:76] found id: ""
	I0813 20:51:36.624640    2943 logs.go:270] 2 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e]
	I0813 20:51:36.624699    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:36.631091    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:36.636619    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:36.636692    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:36.679425    2943 cri.go:76] found id: ""
	I0813 20:51:36.679454    2943 logs.go:270] 0 containers: []
	W0813 20:51:36.679463    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:36.679472    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:36.679543    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:36.721679    2943 cri.go:76] found id: ""
	I0813 20:51:36.721707    2943 logs.go:270] 0 containers: []
	W0813 20:51:36.721720    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:36.721734    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:36.721806    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:36.769661    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:36.769693    2943 cri.go:76] found id: ""
	I0813 20:51:36.769701    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:36.769769    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:36.775201    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:36.775268    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:36.816504    2943 cri.go:76] found id: ""
	I0813 20:51:36.816528    2943 logs.go:270] 0 containers: []
	W0813 20:51:36.816535    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:36.816544    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:36.816617    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:36.855963    2943 cri.go:76] found id: ""
	I0813 20:51:36.855991    2943 logs.go:270] 0 containers: []
	W0813 20:51:36.856000    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:36.856017    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:36.856074    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:36.899745    2943 cri.go:76] found id: ""
	I0813 20:51:36.899772    2943 logs.go:270] 0 containers: []
	W0813 20:51:36.899779    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:36.899787    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:36.899836    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:36.942224    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:51:36.942251    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:36.942260    2943 cri.go:76] found id: ""
	I0813 20:51:36.942269    2943 logs.go:270] 2 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:36.942364    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:36.948122    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:36.954863    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:36.954891    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:37.034276    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:37.034313    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:37.049083    2943 logs.go:123] Gathering logs for kube-apiserver [b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e] ...
	I0813 20:51:37.049116    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b92e70a9a85432a95b0bcc2dea79e0aa81b2deb435825abd93a37368736a551e"
	I0813 20:51:37.103790    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:37.103835    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:37.183691    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:51:37.183744    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:51:37.228634    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:37.228670    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:37.507225    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:37.507271    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 20:51:49.879238    2943 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (12.371940475s)
	W0813 20:51:49.879286    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:49.879306    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:51:49.879318    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:51:49.926493    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:49.926532    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:49.979978    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:49.980019    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:52.526420    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:52.527149    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:52.579318    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:52.579405    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:52.620538    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:51:52.620572    2943 cri.go:76] found id: ""
	I0813 20:51:52.620597    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:51:52.620668    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:52.625790    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:52.625857    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:52.660717    2943 cri.go:76] found id: ""
	I0813 20:51:52.660751    2943 logs.go:270] 0 containers: []
	W0813 20:51:52.660760    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:52.660768    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:52.660830    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:52.698676    2943 cri.go:76] found id: ""
	I0813 20:51:52.698703    2943 logs.go:270] 0 containers: []
	W0813 20:51:52.698712    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:52.698720    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:52.698785    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:52.742228    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:52.742256    2943 cri.go:76] found id: ""
	I0813 20:51:52.742263    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:52.742318    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:52.747912    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:52.747974    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:52.790874    2943 cri.go:76] found id: ""
	I0813 20:51:52.790902    2943 logs.go:270] 0 containers: []
	W0813 20:51:52.790908    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:52.790915    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:52.790980    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:52.828020    2943 cri.go:76] found id: ""
	I0813 20:51:52.828054    2943 logs.go:270] 0 containers: []
	W0813 20:51:52.828064    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:52.828073    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:52.828133    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:52.864989    2943 cri.go:76] found id: ""
	I0813 20:51:52.865019    2943 logs.go:270] 0 containers: []
	W0813 20:51:52.865028    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:52.865036    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:52.865101    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:52.901451    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:51:52.901481    2943 cri.go:76] found id: "0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:52.901488    2943 cri.go:76] found id: ""
	I0813 20:51:52.901495    2943 logs.go:270] 2 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5]
	I0813 20:51:52.901554    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:52.906322    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:52.910336    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:52.910360    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:52.975766    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:52.975810    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:53.094513    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:53.094544    2943 logs.go:123] Gathering logs for kube-controller-manager [0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5] ...
	I0813 20:51:53.094561    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0c1735888f8c88b0b83fdcf774669ad91c81220b04ec6a853cf04cc4739f98b5"
	I0813 20:51:53.152168    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:53.152204    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:53.470306    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:53.470352    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:53.482199    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:51:53.482230    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:51:53.520713    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:53.520743    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:53.575296    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:51:53.575326    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:51:53.629931    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:53.629973    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:51:56.182201    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:51:56.182949    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:51:56.579487    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:51:56.579568    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:51:56.619091    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:51:56.619121    2943 cri.go:76] found id: ""
	I0813 20:51:56.619129    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:51:56.619191    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:56.624691    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:51:56.624750    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:51:56.661529    2943 cri.go:76] found id: ""
	I0813 20:51:56.661559    2943 logs.go:270] 0 containers: []
	W0813 20:51:56.661567    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:51:56.661576    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:51:56.661641    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:51:56.699735    2943 cri.go:76] found id: ""
	I0813 20:51:56.699762    2943 logs.go:270] 0 containers: []
	W0813 20:51:56.699770    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:51:56.699777    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:51:56.699832    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:51:56.737889    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:56.737918    2943 cri.go:76] found id: ""
	I0813 20:51:56.737928    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:51:56.737985    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:56.743775    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:51:56.743845    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:51:56.780542    2943 cri.go:76] found id: ""
	I0813 20:51:56.780572    2943 logs.go:270] 0 containers: []
	W0813 20:51:56.780581    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:51:56.780589    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:51:56.780650    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:51:56.819706    2943 cri.go:76] found id: ""
	I0813 20:51:56.819761    2943 logs.go:270] 0 containers: []
	W0813 20:51:56.819770    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:51:56.819779    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:51:56.819842    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:51:56.853833    2943 cri.go:76] found id: ""
	I0813 20:51:56.853861    2943 logs.go:270] 0 containers: []
	W0813 20:51:56.853867    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:51:56.853874    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:51:56.853943    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:51:56.893727    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:51:56.893756    2943 cri.go:76] found id: ""
	I0813 20:51:56.893764    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:51:56.893822    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:51:56.898434    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:51:56.898464    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:51:56.976965    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:51:56.977001    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:51:56.991578    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:51:56.991619    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:51:57.064360    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:51:57.064388    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:51:57.064404    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:51:57.105623    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:51:57.105658    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:51:57.165053    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:51:57.165088    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:51:57.230207    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:51:57.230249    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:51:57.525049    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:51:57.525099    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
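Every cycle in this stretch of the log has the same shape: enumerate CRI containers for a fixed list of control-plane components with "sudo crictl ps -a --quiet --name=<component>", then tail the logs of whichever containers exist with "crictl logs --tail 400 <id>". (The final "container status" step uses the fallback chain "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", so it still produces output on hosts where crictl is missing.) What follows is a standalone Go sketch of that enumerate-then-tail loop, not minikube's implementation; the component list and commands are copied from the log, while the error handling and output format are illustrative only.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// components mirrors the fixed list the log walks through on every cycle.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
		"kube-controller-manager",
	}
	
	func main() {
		for _, name := range components {
			// "sudo crictl ps -a --quiet --name=<component>" prints one
			// container ID per line, or nothing when no container matches.
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// matches the W-level "No container was found matching" lines
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// tail each matching container's log, as the
				// "Gathering logs for ..." steps above do
				logs, _ := exec.Command("sudo", "crictl", "logs",
					"--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
			}
		}
	}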
	I0813 20:52:00.072114    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:00.072871    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
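Between gathering passes the bootstrapper keeps probing https://192.168.50.24:8443/healthz, and while nothing is listening each probe fails immediately with "dial tcp ... connect: connection refused" (the kube-apiserver container exists but is not serving). Below is a minimal standalone sketch of the same probe pattern using only the Go standard library; the URL and the roughly 3 to 4 second cadence come from the log, while the client timeout and TLS handling are assumptions.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// pollHealthz probes an apiserver /healthz endpoint until it answers
	// 200 OK or the deadline expires. While the endpoint is down, Get
	// returns a dial error ("connection refused"), matching the
	// "stopped:" lines in the log above.
	func pollHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// the bootstrap apiserver presents a cert this bare probe
			// cannot verify, so verification is skipped (sketch only)
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for stop := time.Now().Add(deadline); time.Now().Before(stop); {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(3 * time.Second) // cadence roughly as in the log
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}
	
	func main() {
		fmt.Println(pollHealthz("https://192.168.50.24:8443/healthz", time.Minute))
	}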
	I0813 20:52:00.079056    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:00.079133    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:00.119320    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:00.119354    2943 cri.go:76] found id: ""
	I0813 20:52:00.119364    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:00.119419    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:00.125505    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:00.125564    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:00.175389    2943 cri.go:76] found id: ""
	I0813 20:52:00.175421    2943 logs.go:270] 0 containers: []
	W0813 20:52:00.175430    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:00.175438    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:00.175499    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:00.228686    2943 cri.go:76] found id: ""
	I0813 20:52:00.228717    2943 logs.go:270] 0 containers: []
	W0813 20:52:00.228726    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:00.228734    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:00.228792    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:00.275422    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:00.275449    2943 cri.go:76] found id: ""
	I0813 20:52:00.275457    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:00.275513    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:00.281100    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:00.281170    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:00.322081    2943 cri.go:76] found id: ""
	I0813 20:52:00.322122    2943 logs.go:270] 0 containers: []
	W0813 20:52:00.322130    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:00.322139    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:00.322195    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:00.375384    2943 cri.go:76] found id: ""
	I0813 20:52:00.375408    2943 logs.go:270] 0 containers: []
	W0813 20:52:00.375417    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:00.375425    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:00.375477    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:00.421103    2943 cri.go:76] found id: ""
	I0813 20:52:00.421123    2943 logs.go:270] 0 containers: []
	W0813 20:52:00.421128    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:00.421136    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:00.421194    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:00.464327    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:00.464356    2943 cri.go:76] found id: ""
	I0813 20:52:00.464365    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:00.464425    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:00.470701    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:00.470731    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:00.514747    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:00.514786    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:00.587406    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:00.587447    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:00.640420    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:00.640452    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:00.976304    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:00.976349    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:01.023547    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:01.023589    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:01.118551    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:01.118600    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:01.131201    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:01.131236    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:01.210061    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
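The "failed describe nodes" warning recurs on every pass for the same underlying reason: kubectl reads the kubeconfig, dials the apiserver at localhost:8443, the connection is refused, and the command exits with status 1 and empty stdout. Here is a sketch of capturing that outcome (stdout, stderr, exit status) the way a command runner would; the binary and kubeconfig paths are the on-host paths shown in the log, everything else is illustrative.

	package main
	
	import (
		"bytes"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Running kubectl against a down apiserver reproduces the failure
		// recorded above: empty stdout, "connection refused" on stderr,
		// and exit status 1.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes `+
				`--kubeconfig=/var/lib/minikube/kubeconfig`)
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		// err is an *exec.ExitError wrapping "exit status 1" while the
		// apiserver on localhost:8443 refuses connections.
		fmt.Printf("stdout: %s\nstderr: %s\nerr: %v\n",
			stdout.String(), stderr.String(), err)
	}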
	I0813 20:52:03.710928    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:03.711823    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:04.079264    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:04.079355    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:04.118557    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:04.118588    2943 cri.go:76] found id: ""
	I0813 20:52:04.118597    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:04.118654    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:04.123273    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:04.123337    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:04.158135    2943 cri.go:76] found id: ""
	I0813 20:52:04.158163    2943 logs.go:270] 0 containers: []
	W0813 20:52:04.158172    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:04.158179    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:04.158264    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:04.201789    2943 cri.go:76] found id: ""
	I0813 20:52:04.201815    2943 logs.go:270] 0 containers: []
	W0813 20:52:04.201825    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:04.201834    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:04.201896    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:04.243788    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:04.243822    2943 cri.go:76] found id: ""
	I0813 20:52:04.243832    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:04.243890    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:04.249580    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:04.249674    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:04.291287    2943 cri.go:76] found id: ""
	I0813 20:52:04.291315    2943 logs.go:270] 0 containers: []
	W0813 20:52:04.291324    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:04.291332    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:04.291395    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:04.336064    2943 cri.go:76] found id: ""
	I0813 20:52:04.336097    2943 logs.go:270] 0 containers: []
	W0813 20:52:04.336106    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:04.336114    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:04.336176    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:04.371363    2943 cri.go:76] found id: ""
	I0813 20:52:04.371390    2943 logs.go:270] 0 containers: []
	W0813 20:52:04.371398    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:04.371407    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:04.371470    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:04.411196    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:04.411235    2943 cri.go:76] found id: ""
	I0813 20:52:04.411244    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:04.411306    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:04.416964    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:04.417005    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:04.458131    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:04.458165    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:04.516207    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:04.516245    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:04.575530    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:04.575573    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:04.864976    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:04.865034    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:04.921762    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:04.921803    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:04.991796    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:04.991833    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:05.006281    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:05.006322    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:05.071141    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:07.571803    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:07.572440    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:07.579572    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:07.579634    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:07.618653    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:07.618685    2943 cri.go:76] found id: ""
	I0813 20:52:07.618695    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:07.618754    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:07.624793    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:07.624877    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:07.670737    2943 cri.go:76] found id: ""
	I0813 20:52:07.670764    2943 logs.go:270] 0 containers: []
	W0813 20:52:07.670773    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:07.670780    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:07.670842    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:07.715237    2943 cri.go:76] found id: ""
	I0813 20:52:07.715266    2943 logs.go:270] 0 containers: []
	W0813 20:52:07.715274    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:07.715284    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:07.715351    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:07.767674    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:07.767705    2943 cri.go:76] found id: ""
	I0813 20:52:07.767713    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:07.767772    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:07.775303    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:07.775371    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:07.820277    2943 cri.go:76] found id: ""
	I0813 20:52:07.820307    2943 logs.go:270] 0 containers: []
	W0813 20:52:07.820316    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:07.820325    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:07.820389    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:07.865096    2943 cri.go:76] found id: ""
	I0813 20:52:07.865129    2943 logs.go:270] 0 containers: []
	W0813 20:52:07.865138    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:07.865153    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:07.865218    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:07.905416    2943 cri.go:76] found id: ""
	I0813 20:52:07.905443    2943 logs.go:270] 0 containers: []
	W0813 20:52:07.905452    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:07.905461    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:07.905524    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:07.946989    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:07.947018    2943 cri.go:76] found id: ""
	I0813 20:52:07.947025    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:07.947074    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:07.952580    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:07.952607    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:08.022295    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:08.022318    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:08.022330    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:08.068485    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:08.068520    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:08.146833    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:08.146895    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:08.197358    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:08.197390    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:08.481348    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:08.481396    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:08.520220    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:08.520262    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:08.591095    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:08.591133    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:11.103714    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:11.104423    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:11.578974    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:11.579071    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:11.623465    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:11.623494    2943 cri.go:76] found id: ""
	I0813 20:52:11.623502    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:11.623557    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:11.629008    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:11.629076    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:11.667083    2943 cri.go:76] found id: ""
	I0813 20:52:11.667116    2943 logs.go:270] 0 containers: []
	W0813 20:52:11.667125    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:11.667139    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:11.667209    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:11.706879    2943 cri.go:76] found id: ""
	I0813 20:52:11.706907    2943 logs.go:270] 0 containers: []
	W0813 20:52:11.706916    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:11.706924    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:11.706985    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:11.744244    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:11.744267    2943 cri.go:76] found id: ""
	I0813 20:52:11.744273    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:11.744320    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:11.748627    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:11.748692    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:11.785175    2943 cri.go:76] found id: ""
	I0813 20:52:11.785202    2943 logs.go:270] 0 containers: []
	W0813 20:52:11.785209    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:11.785219    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:11.785282    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:11.821514    2943 cri.go:76] found id: ""
	I0813 20:52:11.821543    2943 logs.go:270] 0 containers: []
	W0813 20:52:11.821551    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:11.821560    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:11.821617    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:11.861859    2943 cri.go:76] found id: ""
	I0813 20:52:11.861893    2943 logs.go:270] 0 containers: []
	W0813 20:52:11.861902    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:11.861915    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:11.861984    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:11.900490    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:11.900522    2943 cri.go:76] found id: ""
	I0813 20:52:11.900530    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:11.900593    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:11.904989    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:11.905014    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:11.962822    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:11.962871    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:12.232901    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:12.232937    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:12.290301    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:12.290342    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:12.365663    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:12.365701    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:12.379384    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:12.379418    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:12.454199    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:12.454231    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:12.454249    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:12.500186    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:12.500226    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:15.069520    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:15.070158    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:15.079359    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:15.079429    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:15.118462    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:15.118490    2943 cri.go:76] found id: ""
	I0813 20:52:15.118497    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:15.118552    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:15.123385    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:15.123454    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:15.160454    2943 cri.go:76] found id: ""
	I0813 20:52:15.160478    2943 logs.go:270] 0 containers: []
	W0813 20:52:15.160483    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:15.160490    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:15.160546    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:15.200157    2943 cri.go:76] found id: ""
	I0813 20:52:15.200189    2943 logs.go:270] 0 containers: []
	W0813 20:52:15.200197    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:15.200205    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:15.200263    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:15.235484    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:15.235534    2943 cri.go:76] found id: ""
	I0813 20:52:15.235546    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:15.235620    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:15.240583    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:15.240663    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:15.285888    2943 cri.go:76] found id: ""
	I0813 20:52:15.285919    2943 logs.go:270] 0 containers: []
	W0813 20:52:15.285928    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:15.285937    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:15.286000    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:15.329513    2943 cri.go:76] found id: ""
	I0813 20:52:15.329551    2943 logs.go:270] 0 containers: []
	W0813 20:52:15.329561    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:15.329570    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:15.329640    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:15.372834    2943 cri.go:76] found id: ""
	I0813 20:52:15.372869    2943 logs.go:270] 0 containers: []
	W0813 20:52:15.372877    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:15.372886    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:15.372954    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:15.414653    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:15.414687    2943 cri.go:76] found id: ""
	I0813 20:52:15.414696    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:15.414764    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:15.419878    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:15.419935    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:15.464642    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:15.464689    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:15.539194    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:15.539236    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:15.599732    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:15.599770    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:15.885629    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:15.885674    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:15.932420    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:15.932454    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:16.001488    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:16.001524    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:16.015431    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:16.015472    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:16.107526    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:18.608092    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:18.608735    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:19.079259    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:19.079342    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:19.113679    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:19.113702    2943 cri.go:76] found id: ""
	I0813 20:52:19.113709    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:19.113757    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:19.118904    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:19.118975    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:19.155213    2943 cri.go:76] found id: ""
	I0813 20:52:19.155242    2943 logs.go:270] 0 containers: []
	W0813 20:52:19.155251    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:19.155259    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:19.155331    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:19.191717    2943 cri.go:76] found id: ""
	I0813 20:52:19.191738    2943 logs.go:270] 0 containers: []
	W0813 20:52:19.191744    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:19.191750    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:19.191793    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:19.227902    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:19.227929    2943 cri.go:76] found id: ""
	I0813 20:52:19.227938    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:19.227999    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:19.233161    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:19.233214    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:19.268417    2943 cri.go:76] found id: ""
	I0813 20:52:19.268439    2943 logs.go:270] 0 containers: []
	W0813 20:52:19.268445    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:19.268451    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:19.268497    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:19.307192    2943 cri.go:76] found id: ""
	I0813 20:52:19.307218    2943 logs.go:270] 0 containers: []
	W0813 20:52:19.307224    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:19.307231    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:19.307280    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:19.347351    2943 cri.go:76] found id: ""
	I0813 20:52:19.347378    2943 logs.go:270] 0 containers: []
	W0813 20:52:19.347388    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:19.347396    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:19.347466    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:19.391686    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:19.391722    2943 cri.go:76] found id: ""
	I0813 20:52:19.391730    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:19.391790    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:19.397549    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:19.397579    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:19.449405    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:19.449455    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:19.737554    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:19.737608    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:19.780275    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:19.780315    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:19.854046    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:19.854079    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:19.866179    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:19.866207    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:19.937099    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:19.937123    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:19.937142    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:19.981389    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:19.981426    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:22.545961    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:22.546639    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:22.579795    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:22.579867    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:22.611845    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:22.611879    2943 cri.go:76] found id: ""
	I0813 20:52:22.611888    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:22.611948    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:22.616052    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:22.616118    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:22.648668    2943 cri.go:76] found id: ""
	I0813 20:52:22.648696    2943 logs.go:270] 0 containers: []
	W0813 20:52:22.648705    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:22.648712    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:22.648774    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:22.685556    2943 cri.go:76] found id: ""
	I0813 20:52:22.685595    2943 logs.go:270] 0 containers: []
	W0813 20:52:22.685601    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:22.685608    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:22.685658    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:22.718911    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:22.718942    2943 cri.go:76] found id: ""
	I0813 20:52:22.718951    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:22.719005    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:22.723434    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:22.723510    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:22.755494    2943 cri.go:76] found id: ""
	I0813 20:52:22.755522    2943 logs.go:270] 0 containers: []
	W0813 20:52:22.755530    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:22.755538    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:22.755596    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:22.788461    2943 cri.go:76] found id: ""
	I0813 20:52:22.788485    2943 logs.go:270] 0 containers: []
	W0813 20:52:22.788490    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:22.788496    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:22.788548    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:22.824300    2943 cri.go:76] found id: ""
	I0813 20:52:22.824329    2943 logs.go:270] 0 containers: []
	W0813 20:52:22.824335    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:22.824342    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:22.824401    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:22.859469    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:22.859497    2943 cri.go:76] found id: ""
	I0813 20:52:22.859506    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:22.859563    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:22.864126    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:22.864152    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:22.931096    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:22.931130    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:22.945457    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:22.945494    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:23.021630    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:23.021658    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:23.021676    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:23.069707    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:23.069750    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:23.140087    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:23.140126    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:23.203939    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:23.203982    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:23.464689    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:23.464732    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:26.008058    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:26.008821    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:26.079113    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:26.079206    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:26.134593    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:26.134622    2943 cri.go:76] found id: ""
	I0813 20:52:26.134631    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:26.134692    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:26.140821    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:26.140896    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:26.190724    2943 cri.go:76] found id: ""
	I0813 20:52:26.190753    2943 logs.go:270] 0 containers: []
	W0813 20:52:26.190761    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:26.190770    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:26.190830    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:26.236826    2943 cri.go:76] found id: ""
	I0813 20:52:26.236859    2943 logs.go:270] 0 containers: []
	W0813 20:52:26.236869    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:26.236878    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:26.236945    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:26.277390    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:26.277416    2943 cri.go:76] found id: ""
	I0813 20:52:26.277424    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:26.277480    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:26.282189    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:26.282263    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:26.326248    2943 cri.go:76] found id: ""
	I0813 20:52:26.326279    2943 logs.go:270] 0 containers: []
	W0813 20:52:26.326289    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:26.326297    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:26.326366    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:26.367929    2943 cri.go:76] found id: ""
	I0813 20:52:26.367964    2943 logs.go:270] 0 containers: []
	W0813 20:52:26.367973    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:26.367982    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:26.368048    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:26.424300    2943 cri.go:76] found id: ""
	I0813 20:52:26.424334    2943 logs.go:270] 0 containers: []
	W0813 20:52:26.424344    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:26.424353    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:26.424416    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:26.475920    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:26.475949    2943 cri.go:76] found id: ""
	I0813 20:52:26.475957    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:26.476029    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:26.481861    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:26.481896    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:26.558747    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:26.558792    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:26.882110    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:26.882156    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:26.931316    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:26.931360    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:27.023339    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:27.023395    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:27.039152    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:27.039194    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:27.112893    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:27.112925    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:27.112940    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:27.158555    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:27.158595    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:29.731664    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:29.732641    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:30.079441    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 20:52:30.079537    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 20:52:30.126055    2943 cri.go:76] found id: "8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:30.126136    2943 cri.go:76] found id: ""
	I0813 20:52:30.126156    2943 logs.go:270] 1 containers: [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174]
	I0813 20:52:30.126227    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:30.132975    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 20:52:30.133090    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 20:52:30.174662    2943 cri.go:76] found id: ""
	I0813 20:52:30.174693    2943 logs.go:270] 0 containers: []
	W0813 20:52:30.174701    2943 logs.go:272] No container was found matching "etcd"
	I0813 20:52:30.174710    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 20:52:30.174780    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 20:52:30.222728    2943 cri.go:76] found id: ""
	I0813 20:52:30.222760    2943 logs.go:270] 0 containers: []
	W0813 20:52:30.222769    2943 logs.go:272] No container was found matching "coredns"
	I0813 20:52:30.222778    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 20:52:30.222836    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 20:52:30.265944    2943 cri.go:76] found id: "ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:30.265976    2943 cri.go:76] found id: ""
	I0813 20:52:30.265985    2943 logs.go:270] 1 containers: [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f]
	I0813 20:52:30.266053    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:30.270709    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 20:52:30.270778    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 20:52:30.313923    2943 cri.go:76] found id: ""
	I0813 20:52:30.313949    2943 logs.go:270] 0 containers: []
	W0813 20:52:30.313972    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 20:52:30.313983    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 20:52:30.314066    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 20:52:30.356528    2943 cri.go:76] found id: ""
	I0813 20:52:30.356561    2943 logs.go:270] 0 containers: []
	W0813 20:52:30.356572    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 20:52:30.356581    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 20:52:30.356659    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 20:52:30.400590    2943 cri.go:76] found id: ""
	I0813 20:52:30.400624    2943 logs.go:270] 0 containers: []
	W0813 20:52:30.400633    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 20:52:30.400644    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 20:52:30.400712    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 20:52:30.438833    2943 cri.go:76] found id: "78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:30.438883    2943 cri.go:76] found id: ""
	I0813 20:52:30.438892    2943 logs.go:270] 1 containers: [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009]
	I0813 20:52:30.438953    2943 ssh_runner.go:149] Run: which crictl
	I0813 20:52:30.444684    2943 logs.go:123] Gathering logs for container status ...
	I0813 20:52:30.444710    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 20:52:30.492571    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 20:52:30.492603    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 20:52:30.571940    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 20:52:30.571976    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 20:52:30.586133    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 20:52:30.586160    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 20:52:30.655911    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 20:52:30.655941    2943 logs.go:123] Gathering logs for kube-apiserver [8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174] ...
	I0813 20:52:30.655955    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8aadf1446a5512fc5a3bf88b8f3d41225c3014447fb8f7917e6a5c1b65bf6174"
	I0813 20:52:30.698374    2943 logs.go:123] Gathering logs for kube-scheduler [ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f] ...
	I0813 20:52:30.698405    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ba3ae86bd837360be7f6ac154a370ecca49fe8a6f42137a2c61f1ccd64aa242f"
	I0813 20:52:30.776662    2943 logs.go:123] Gathering logs for kube-controller-manager [78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009] ...
	I0813 20:52:30.776723    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 78a4e696050a3509c5334170fcb4934847049ec95901d088e3c46aef3855e009"
	I0813 20:52:30.833958    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 20:52:30.833993    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 20:52:33.632133    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:52:33.632746    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:52:34.078990    2943 kubeadm.go:604] restartCluster took 4m12.028533229s
	W0813 20:52:34.079153    2943 out.go:242] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	I0813 20:52:34.079195    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 20:52:39.628383    2943 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.549158561s)
	I0813 20:52:39.628459    2943 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:52:39.644912    2943 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:52:39.644997    2943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:52:39.685884    2943 cri.go:76] found id: ""
	I0813 20:52:39.685961    2943 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:52:39.694094    2943 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:52:39.701285    2943 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:52:39.701329    2943 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:56:55.733821    2943 out.go:204]   - Generating certificates and keys ...
	I0813 20:56:55.737021    2943 out.go:204]   - Booting up control plane ...
	W0813 20:56:55.739048    2943 out.go:242] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.22.0-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0813 20:56:55.739103    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 20:56:57.512375    2943 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.773237341s)
	I0813 20:56:57.512457    2943 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:56:57.526653    2943 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:56:57.526750    2943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:56:57.561685    2943 cri.go:76] found id: ""
	I0813 20:56:57.561829    2943 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:56:57.569109    2943 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:56:57.569153    2943 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:56:58.108527    2943 out.go:204]   - Generating certificates and keys ...
	I0813 20:56:58.954974    2943 out.go:204]   - Booting up control plane ...
	I0813 21:00:58.981049    2943 kubeadm.go:392] StartCluster complete in 12m36.97781251s
	I0813 21:00:58.981109    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0813 21:00:58.981198    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 21:00:59.031245    2943 cri.go:76] found id: "cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08"
	I0813 21:00:59.031285    2943 cri.go:76] found id: ""
	I0813 21:00:59.031296    2943 logs.go:270] 1 containers: [cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08]
	I0813 21:00:59.031387    2943 ssh_runner.go:149] Run: which crictl
	I0813 21:00:59.037606    2943 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0813 21:00:59.037679    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 21:00:59.078183    2943 cri.go:76] found id: ""
	I0813 21:00:59.078214    2943 logs.go:270] 0 containers: []
	W0813 21:00:59.078222    2943 logs.go:272] No container was found matching "etcd"
	I0813 21:00:59.078230    2943 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0813 21:00:59.078294    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 21:00:59.119265    2943 cri.go:76] found id: ""
	I0813 21:00:59.119292    2943 logs.go:270] 0 containers: []
	W0813 21:00:59.119298    2943 logs.go:272] No container was found matching "coredns"
	I0813 21:00:59.119305    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0813 21:00:59.119367    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 21:00:59.156147    2943 cri.go:76] found id: "e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796"
	I0813 21:00:59.156178    2943 cri.go:76] found id: ""
	I0813 21:00:59.156187    2943 logs.go:270] 1 containers: [e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796]
	I0813 21:00:59.156250    2943 ssh_runner.go:149] Run: which crictl
	I0813 21:00:59.160723    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0813 21:00:59.160794    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 21:00:59.201574    2943 cri.go:76] found id: ""
	I0813 21:00:59.201614    2943 logs.go:270] 0 containers: []
	W0813 21:00:59.201623    2943 logs.go:272] No container was found matching "kube-proxy"
	I0813 21:00:59.201631    2943 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 21:00:59.201687    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 21:00:59.250418    2943 cri.go:76] found id: ""
	I0813 21:00:59.250441    2943 logs.go:270] 0 containers: []
	W0813 21:00:59.250449    2943 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 21:00:59.250458    2943 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0813 21:00:59.250520    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 21:00:59.288736    2943 cri.go:76] found id: ""
	I0813 21:00:59.288767    2943 logs.go:270] 0 containers: []
	W0813 21:00:59.288776    2943 logs.go:272] No container was found matching "storage-provisioner"
	I0813 21:00:59.288786    2943 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 21:00:59.288862    2943 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 21:00:59.325745    2943 cri.go:76] found id: "8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224"
	I0813 21:00:59.325774    2943 cri.go:76] found id: ""
	I0813 21:00:59.325781    2943 logs.go:270] 1 containers: [8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224]
	I0813 21:00:59.325842    2943 ssh_runner.go:149] Run: which crictl
	I0813 21:00:59.331146    2943 logs.go:123] Gathering logs for kubelet ...
	I0813 21:00:59.331179    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 21:00:59.403386    2943 logs.go:123] Gathering logs for dmesg ...
	I0813 21:00:59.403416    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 21:00:59.415506    2943 logs.go:123] Gathering logs for describe nodes ...
	I0813 21:00:59.415531    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0813 21:00:59.486387    2943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0813 21:00:59.486417    2943 logs.go:123] Gathering logs for kube-apiserver [cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08] ...
	I0813 21:00:59.486432    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08"
	I0813 21:00:59.526474    2943 logs.go:123] Gathering logs for kube-scheduler [e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796] ...
	I0813 21:00:59.526504    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796"
	I0813 21:00:59.597146    2943 logs.go:123] Gathering logs for kube-controller-manager [8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224] ...
	I0813 21:00:59.597181    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224"
	I0813 21:00:59.652993    2943 logs.go:123] Gathering logs for CRI-O ...
	I0813 21:00:59.653021    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0813 21:00:59.900198    2943 logs.go:123] Gathering logs for container status ...
	I0813 21:00:59.900232    2943 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0813 21:00:59.943307    2943 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.22.0-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0813 21:00:59.943341    2943 out.go:242] * 
	W0813 21:00:59.943541    2943 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.22.0-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0813 21:00:59.943569    2943 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 21:00:59.945571    2943 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                        │
	│                                                                                                                                                      │
	│    * Please attach the following file to the GitHub issue:                                                                                           │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 21:00:59.948704    2943 out.go:177] 
	W0813 21:00:59.948876    2943 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.22.0-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0813 21:00:59.948996    2943 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0813 21:00:59.949062    2943 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0813 21:00:59.950753    2943 out.go:177] 

                                                
                                                
** /stderr **
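Note: every kubeadm attempt above fails at the wait-control-plane phase, and kubeadm's own output lists the follow-up checks. A minimal sketch of those checks, run from inside the VM (assuming SSH access via the profile, e.g. 'minikube ssh -p kubernetes-upgrade-20210813204600-30853'; commands taken from the kubeadm advice above):

	# kubelet health, as suggested by kubeadm
	systemctl status kubelet
	journalctl -xeu kubelet
	# control-plane containers under CRI-O, as suggested by kubeadm
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect a failing container (CONTAINERID from the listing above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID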
version_upgrade_test.go:247: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204600-30853 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio : exit status 109
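Note: the suggestion in the log above points at the kubelet cgroup driver. A hypothetical retry of the same start command with that extra-config flag added (flag taken verbatim from the suggestion; not exercised in this run):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204600-30853 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio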
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813204600-30853 version --output=json
version_upgrade_test.go:250: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20210813204600-30853 version --output=json: exit status 1 (53.582827ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "20",
	    "gitVersion": "v1.20.5",
	    "gitCommit": "6b1d87acf3c8253c123756b9e61dac642678305f",
	    "gitTreeState": "clean",
	    "buildDate": "2021-03-18T01:10:43Z",
	    "goVersion": "go1.15.8",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.50.24:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:252: error running kubectl: exit status 1
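Note: 'kubectl version --output=json' printed only the clientVersion block because the connection to the server was refused; a serverVersion block would require a reachable apiserver. A quick reachability probe against the endpoint from the log (assuming curl is available; -k skips certificate verification, so this only checks that the port answers):

	curl -k https://192.168.50.24:8443/healthz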
panic.go:613: *** TestKubernetesUpgrade FAILED at 2021-08-13 21:01:00.172981601 +0000 UTC m=+3192.177546709
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20210813204600-30853 -n kubernetes-upgrade-20210813204600-30853

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20210813204600-30853 -n kubernetes-upgrade-20210813204600-30853: exit status 2 (258.255542ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813204600-30853 logs -n 25

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813204600-30853 logs -n 25: exit status 110 (895.158377ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | stopped-upgrade-20210813204857-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:52 UTC | Fri, 13 Aug 2021 20:54:43 UTC |
	|         | stopped-upgrade-20210813204857-30853              |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| logs    | -p                                                | stopped-upgrade-20210813204857-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:43 UTC | Fri, 13 Aug 2021 20:54:44 UTC |
	|         | stopped-upgrade-20210813204857-30853              |                                         |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210813204857-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:44 UTC | Fri, 13 Aug 2021 20:54:45 UTC |
	|         | stopped-upgrade-20210813204857-30853              |                                         |         |         |                               |                               |
	| start   | -p cilium-20210813204704-30853                    | cilium-20210813204704-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:21 UTC | Fri, 13 Aug 2021 20:55:53 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=cilium --driver=kvm2                        |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| ssh     | -p cilium-20210813204704-30853                    | cilium-20210813204704-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:58 UTC | Fri, 13 Aug 2021 20:55:59 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| delete  | -p cilium-20210813204704-30853                    | cilium-20210813204704-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:56:13 UTC | Fri, 13 Aug 2021 20:56:15 UTC |
	| start   | -p calico-20210813204704-30853                    | calico-20210813204704-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:17 UTC | Fri, 13 Aug 2021 20:56:24 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=calico --driver=kvm2                        |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| ssh     | -p calico-20210813204704-30853                    | calico-20210813204704-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:56:30 UTC | Fri, 13 Aug 2021 20:56:30 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| start   | -p                                                | custom-weave-20210813204704-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:45 UTC | Fri, 13 Aug 2021 20:56:44 UTC |
	|         | custom-weave-20210813204704-30853                 |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=testdata/weavenet.yaml                      |                                         |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| ssh     | -p                                                | custom-weave-20210813204704-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:56:44 UTC | Fri, 13 Aug 2021 20:56:44 UTC |
	|         | custom-weave-20210813204704-30853                 |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| delete  | -p calico-20210813204704-30853                    | calico-20210813204704-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:56:44 UTC | Fri, 13 Aug 2021 20:56:51 UTC |
	| delete  | -p                                                | custom-weave-20210813204704-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:02 UTC | Fri, 13 Aug 2021 20:57:03 UTC |
	|         | custom-weave-20210813204704-30853                 |                                         |         |         |                               |                               |
	| start   | -p                                                | enable-default-cni-20210813204703-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:56:15 UTC | Fri, 13 Aug 2021 20:58:08 UTC |
	|         | enable-default-cni-20210813204703-30853           |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --enable-default-cni=true --driver=kvm2           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| ssh     | -p                                                | enable-default-cni-20210813204703-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:08 UTC | Fri, 13 Aug 2021 20:58:08 UTC |
	|         | enable-default-cni-20210813204703-30853           |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| delete  | -p                                                | enable-default-cni-20210813204703-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:22 UTC | Fri, 13 Aug 2021 20:58:23 UTC |
	|         | enable-default-cni-20210813204703-30853           |                                         |         |         |                               |                               |
	| start   | -p                                                | flannel-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:56:51 UTC | Fri, 13 Aug 2021 20:58:55 UTC |
	|         | flannel-20210813204703-30853                      |                                         |         |         |                               |                               |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=flannel --driver=kvm2                       |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| start   | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:03 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=bridge --driver=kvm2                        |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	| ssh     | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:00 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| ssh     | -p                                                | flannel-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:03 UTC | Fri, 13 Aug 2021 20:59:03 UTC |
	|         | flannel-20210813204703-30853                      |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| delete  | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853             | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	| delete  | -p                                                | flannel-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 20:59:17 UTC |
	|         | flannel-20210813204703-30853                      |                                         |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:23 UTC | Fri, 13 Aug 2021 21:00:44 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                         |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                         |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205823-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:56 UTC | Fri, 13 Aug 2021 21:00:57 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                         |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205823-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:57 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205823-30853    | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                         |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:59:17
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:59:17.281488    9572 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:59:17.281604    9572 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:17.281655    9572 out.go:311] Setting ErrFile to fd 2...
	I0813 20:59:17.281659    9572 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:17.281820    9572 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:59:17.282238    9572 out.go:305] Setting JSON to false
	I0813 20:59:17.335978    9572 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9719,"bootTime":1628878638,"procs":175,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:59:17.336131    9572 start.go:121] virtualization: kvm guest
	I0813 20:59:17.339767    9572 out.go:177] * [embed-certs-20210813205917-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:59:17.343379    9572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:59:17.339946    9572 notify.go:169] Checking for updates...
	I0813 20:59:17.346300    9572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:59:17.365092    9572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:59:17.374337    9572 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:59:17.375474    9572 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:59:17.375618    9572 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:59:17.375730    9572 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:59:17.375786    9572 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:59:17.686357    9572 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:59:17.686398    9572 start.go:278] selected driver: kvm2
	I0813 20:59:17.686405    9572 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:59:17.686429    9572 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:59:17.687718    9572 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:59:17.687882    9572 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:59:17.711844    9572 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:59:17.711971    9572 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:59:17.712200    9572 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:59:17.712240    9572 cni.go:93] Creating CNI manager for ""
	I0813 20:59:17.712257    9572 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:59:17.712267    9572 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:59:17.712284    9572 start_flags.go:277] config:
	{Name:embed-certs-20210813205917-30853 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813205917-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:17.712486    9572 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:59:17.538892    8936 out.go:204]   - Configuring RBAC rules ...
	I0813 20:59:18.040843    8936 cni.go:93] Creating CNI manager for ""
	I0813 20:59:18.040880    8936 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:59:18.042541    8936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:59:18.042615    8936 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:59:18.055064    8936 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 20:59:18.101796    8936 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:59:18.101936    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:18.102039    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205823-30853 minikube.k8s.io/updated_at=2021_08_13T20_59_18_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:15.561385    9422 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 20:59:15.559295    9422 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 20:59:15.559375    9422 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 20:59:15.561553    9422 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:59:15.561588    9422 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:15.560085    9422 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0813 20:59:15.576947    9422 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0813 20:59:15.583167    9422 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:15.584536    9422 main.go:130] libmachine: Using API Version  1
	I0813 20:59:15.584560    9422 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:15.584730    9422 image.go:171] found k8s.gcr.io/pause:3.4.1 locally: &{Image:0xc0001c0460}
	I0813 20:59:15.584759    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1
	I0813 20:59:15.585011    9422 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:15.585226    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 20:59:15.585651    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 20:59:15.585862    9422 start.go:160] libmachine.API.Create for "no-preload-20210813205915-30853" (driver="kvm2")
	I0813 20:59:15.585894    9422 client.go:168] LocalClient.Create starting
	I0813 20:59:15.585928    9422 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:59:15.585960    9422 main.go:130] libmachine: Decoding PEM data...
	I0813 20:59:15.585985    9422 main.go:130] libmachine: Parsing certificate...
	I0813 20:59:15.586133    9422 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:59:15.586156    9422 main.go:130] libmachine: Decoding PEM data...
	I0813 20:59:15.586176    9422 main.go:130] libmachine: Parsing certificate...
	I0813 20:59:15.586233    9422 main.go:130] libmachine: Running pre-create checks...
	I0813 20:59:15.586259    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .PreCreateCheck
	I0813 20:59:15.586611    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetConfigRaw
	I0813 20:59:15.587139    9422 main.go:130] libmachine: Creating machine...
	I0813 20:59:15.587162    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Create
	I0813 20:59:15.587302    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating KVM machine...
	I0813 20:59:15.591034    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found existing default KVM network
	I0813 20:59:15.594502    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.594344    9456 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e0:e4:09}}
	I0813 20:59:15.596700    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.596627    9456 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:e3:57}}
	I0813 20:59:15.598114    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.598021    9456 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:e9:3e}}
	I0813 20:59:15.599356    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.599288    9456 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:6e:16:57}}
	I0813 20:59:15.603559    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.603468    9456 network.go:240] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:be:75}}
	I0813 20:59:15.604825    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.604751    9456 network.go:240] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName:virbr6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:e7:31}}
	I0813 20:59:15.606610    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.606520    9456 network.go:288] reserving subnet 192.168.105.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.105.0:0xc000190028] misses:0}
	I0813 20:59:15.606647    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:15.606562    9456 network.go:235] using free private subnet 192.168.105.0/24: &{IP:192.168.105.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.105.0/24 Gateway:192.168.105.1 ClientMin:192.168.105.2 ClientMax:192.168.105.254 Broadcast:192.168.105.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:59:15.633583    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | trying to create private KVM network mk-no-preload-20210813205915-30853 192.168.105.0/24...
	I0813 20:59:15.640031    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 20:59:15.640071    9422 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 81.368804ms
	I0813 20:59:15.640091    9422 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 20:59:15.952544    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 20:59:16.088921    9422 image.go:171] found k8s.gcr.io/coredns/coredns:v1.8.0 locally: &{Image:0xc0001185e0}
	I0813 20:59:16.088961    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0
	I0813 20:59:16.437123    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 20:59:16.437191    9422 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 878.159792ms
	I0813 20:59:16.437215    9422 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 20:59:16.518980    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | private KVM network mk-no-preload-20210813205915-30853 192.168.105.0/24 created
	I0813 20:59:16.519088    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853 ...
	I0813 20:59:16.519116    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:16.496676    9456 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:59:16.519150    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:59:16.519183    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:59:16.729087    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:16.728948    9456 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa...
	I0813 20:59:16.902179    9422 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000c4e3c0}
	I0813 20:59:16.902236    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 20:59:17.182925    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:17.182765    9456 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/no-preload-20210813205915-30853.rawdisk...
	I0813 20:59:17.182965    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Writing magic tar header
	I0813 20:59:17.182990    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Writing SSH key tar header
	I0813 20:59:17.183013    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:17.182914    9456 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853 ...
	I0813 20:59:17.183084    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853
	I0813 20:59:17.183186    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853 (perms=drwx------)
	I0813 20:59:17.183247    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:59:17.183280    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:59:17.183299    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:59:17.183326    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:59:17.183341    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:59:17.183360    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:59:17.183374    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:59:17.183392    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:59:17.183411    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:59:17.183425    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:59:17.183435    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Checking permissions on dir: /home
	I0813 20:59:17.183448    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Skipping /home - not owner
	I0813 20:59:17.183463    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating domain...
	I0813 20:59:17.213482    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:df:f4:fc in network default
	I0813 20:59:17.214182    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:17.214216    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring networks are active...
	I0813 20:59:17.216797    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network default is active
	I0813 20:59:17.217208    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network mk-no-preload-20210813205915-30853 is active
	I0813 20:59:17.218280    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Getting domain xml...
	I0813 20:59:17.220501    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating domain...
	I0813 20:59:17.790736    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting to get IP...
	I0813 20:59:17.795106    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:17.795310    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:17.795368    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:17.790740    9456 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:59:18.055315    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:18.055665    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:18.055706    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:18.055594    9456 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:59:18.225673    9422 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc0005904a0}
	I0813 20:59:18.231851    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 20:59:18.367033    9422 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc000118880}
	I0813 20:59:18.367136    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 20:59:18.444411    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:18.444463    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:18.444479    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:18.444413    9456 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:59:18.871974    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:18.872509    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:18.872560    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:18.872479    9456 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:59:19.231224    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 20:59:19.231286    9422 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 3.672547939s
	I0813 20:59:19.231309    9422 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 20:59:19.345929    9422 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc0001c04a0}
	I0813 20:59:19.345979    9422 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3
	I0813 20:59:19.347085    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:19.347591    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:19.347620    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:19.347591    9456 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:59:19.936677    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:19.937567    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:19.937593    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:19.937501    9456 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:59:17.714791    9572 out.go:177] * Starting control plane node embed-certs-20210813205917-30853 in cluster embed-certs-20210813205917-30853
	I0813 20:59:17.714822    9572 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:59:17.714889    9572 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:59:17.714924    9572 cache.go:56] Caching tarball of preloaded images
	I0813 20:59:17.715037    9572 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:59:17.715067    9572 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:59:17.715224    9572 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813205917-30853/config.json ...
	I0813 20:59:17.715256    9572 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813205917-30853/config.json: {Name:mk82f6d869744cc63b2b79b85936caa6ab6254d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.715411    9572 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:59:17.715449    9572 start.go:313] acquiring machines lock for embed-certs-20210813205917-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:59:19.282536    8936 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.18069523s)
	I0813 20:59:19.282568    8936 ops.go:34] apiserver oom_adj: 16
	I0813 20:59:19.282574    8936 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:59:19.282583    8936 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:59:19.282633    8936 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205823-30853 minikube.k8s.io/updated_at=2021_08_13T20_59_18_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.180577726s)
	I0813 20:59:19.282671    8936 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.180703852s)
	I0813 20:59:19.282718    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:19.914739    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:20.414969    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:20.914248    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:21.414828    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:21.914254    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:22.414447    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:22.915084    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:23.415035    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:20.671277    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 20:59:20.671328    9422 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 5.112529071s
	I0813 20:59:20.671343    9422 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 20:59:20.773113    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:20.773632    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:20.773721    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:20.773600    9456 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:59:21.521214    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:21.521805    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:21.521830    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:21.521719    9456 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:59:22.510277    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:22.510960    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:22.510997    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:22.510916    9456 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:59:23.702203    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:23.702936    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:23.702964    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:23.702728    9456 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:59:24.831412    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 20:59:24.831462    9422 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 9.272675277s
	I0813 20:59:24.831483    9422 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 20:59:25.309873    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 20:59:25.309923    9422 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 9.75078586s
	I0813 20:59:25.309954    9422 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 20:59:25.381023    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:25.381621    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:25.381647    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:25.381598    9456 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:59:23.914614    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:24.414247    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:24.914625    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:25.414215    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:25.914661    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:26.414695    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:26.915178    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:27.414331    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:27.914335    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:28.414865    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:27.728884    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:27.729563    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:27.729600    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:27.729431    9456 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:59:28.914982    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:29.414925    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:29.915192    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:30.414250    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:30.914242    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:31.414871    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:31.914937    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:32.414189    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:32.914645    8936 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:59:33.025896    8936 kubeadm.go:985] duration metric: took 14.92401594s to wait for elevateKubeSystemPrivileges.
	I0813 20:59:33.025934    8936 kubeadm.go:392] StartCluster complete in 33.298926686s
	I0813 20:59:33.025958    8936 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:33.026088    8936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:59:33.027399    8936 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:33.552607    8936 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205823-30853" rescaled to 1
	I0813 20:59:33.552680    8936 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.49 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 20:59:33.554807    8936 out.go:177] * Verifying Kubernetes components...
	I0813 20:59:33.552742    8936 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:59:33.552786    8936 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:59:33.552938    8936 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:59:33.555018    8936 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205823-30853"
	I0813 20:59:33.555036    8936 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205823-30853"
	W0813 20:59:33.555044    8936 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:59:33.554893    8936 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:59:33.555073    8936 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 20:59:33.555101    8936 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205823-30853"
	I0813 20:59:33.555126    8936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205823-30853"
	I0813 20:59:33.555580    8936 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:59:33.555623    8936 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:33.555580    8936 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:59:33.555706    8936 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:33.571671    8936 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0813 20:59:33.572165    8936 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:33.575311    8936 main.go:130] libmachine: Using API Version  1
	I0813 20:59:33.575338    8936 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:33.575912    8936 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:33.576095    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 20:59:33.582053    8936 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0813 20:59:33.582551    8936 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:33.583060    8936 main.go:130] libmachine: Using API Version  1
	I0813 20:59:33.583084    8936 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:33.583530    8936 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:33.584123    8936 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:59:33.584167    8936 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:33.587935    8936 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205823-30853"
	W0813 20:59:33.587958    8936 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:59:33.587990    8936 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 20:59:33.588447    8936 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:59:33.588481    8936 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:33.597655    8936 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0813 20:59:33.598159    8936 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:33.598703    8936 main.go:130] libmachine: Using API Version  1
	I0813 20:59:33.598734    8936 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:33.599171    8936 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:33.599377    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 20:59:33.603123    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 20:59:33.605061    8936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:59:33.605185    8936 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:59:33.605202    8936 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:59:33.605223    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 20:59:33.603528    8936 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33889
	I0813 20:59:33.605927    8936 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:33.606466    8936 main.go:130] libmachine: Using API Version  1
	I0813 20:59:33.606482    8936 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:33.606993    8936 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:33.607582    8936 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:59:33.607621    8936 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:33.615735    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 20:59:33.616161    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 21:58:39 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 20:59:33.616181    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 20:59:33.616434    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 20:59:33.616609    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 20:59:33.616803    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 20:59:33.616967    8936 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 20:59:33.623015    8936 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44909
	I0813 20:59:33.623526    8936 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:33.624036    8936 main.go:130] libmachine: Using API Version  1
	I0813 20:59:33.624061    8936 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:33.624484    8936 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:33.625211    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 20:59:33.628445    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 20:59:33.628672    8936 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:59:33.628687    8936 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:59:33.628705    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 20:59:33.634962    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 20:59:33.635497    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 21:58:39 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 20:59:33.635526    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 20:59:33.635821    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 20:59:33.635972    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 20:59:33.636088    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 20:59:33.636213    8936 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 20:59:33.783254    8936 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:59:33.783790    8936 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:59:33.785074    8936 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 20:59:33.790725    8936 node_ready.go:49] node "old-k8s-version-20210813205823-30853" has status "Ready":"True"
	I0813 20:59:33.790744    8936 node_ready.go:38] duration metric: took 5.648165ms waiting for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 20:59:33.790755    8936 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:59:33.801712    8936 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-7splf" in "kube-system" namespace to be "Ready" ...
	I0813 20:59:33.813697    8936 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:59:34.680743    8936 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
	I0813 20:59:34.930787    8936 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.146956586s)
	I0813 20:59:34.930838    8936 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:34.930865    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 20:59:34.930913    8936 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117189545s)
	I0813 20:59:34.930943    8936 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:34.930954    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 20:59:34.931143    8936 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:34.931163    8936 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:34.931174    8936 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:34.931185    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 20:59:34.931296    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 20:59:34.931321    8936 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:34.931338    8936 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:34.931352    8936 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:34.931367    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 20:59:34.931387    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 20:59:34.931426    8936 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:34.931442    8936 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:34.933285    8936 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:34.933318    8936 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:34.933332    8936 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:34.933343    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 20:59:34.933349    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 20:59:34.933782    8936 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 20:59:34.933841    8936 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:34.933867    8936 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:31.097443    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:31.097947    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find current IP address of domain no-preload-20210813205915-30853 in network mk-no-preload-20210813205915-30853
	I0813 20:59:31.097981    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | I0813 20:59:31.097899    9456 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 20:59:34.188029    9422 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 20:59:34.188086    9422 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 18.629192776s
	I0813 20:59:34.188105    9422 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 20:59:34.188152    9422 cache.go:88] Successfully saved all images to host disk.
	I0813 20:59:34.217355    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:34.217832    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Found IP for machine: 192.168.105.107
	I0813 20:59:34.217861    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserving static IP address...
	I0813 20:59:34.217879    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has current primary IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:34.218108    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | unable to find host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"} in network mk-no-preload-20210813205915-30853
	I0813 20:59:34.266825    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserved static IP address: 192.168.105.107
	I0813 20:59:34.266902    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting for SSH to be available...
	I0813 20:59:34.266913    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 20:59:34.272454    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:34.272779    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:d2:3d}
	I0813 20:59:34.272807    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 20:59:34.272974    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 20:59:34.273000    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 20:59:34.273041    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:59:34.273057    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 20:59:34.273110    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 20:59:34.414704    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 20:59:34.415258    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) KVM machine creation complete!
	I0813 20:59:34.415334    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetConfigRaw
	I0813 20:59:34.415997    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 20:59:34.416204    9422 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 20:59:34.416432    9422 main.go:130] libmac
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:47:56 UTC, end at Fri 2021-08-13 21:01:00 UTC. --
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.190452365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=134aff5f-3e85-423d-a2f9-da26decf33ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.711960708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b234a26e-e17a-4fa5-af9b-ffe9c71de8af name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.712108076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b234a26e-e17a-4fa5-af9b-ffe9c71de8af name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.712210115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b234a26e-e17a-4fa5-af9b-ffe9c71de8af name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.751742520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e5abede-fcf2-4260-a3b3-e70667838a58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.751986789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e5abede-fcf2-4260-a3b3-e70667838a58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.752176048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e5abede-fcf2-4260-a3b3-e70667838a58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.788134964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7cb0306a-91f3-419c-b2d8-29f47f6e39c9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.788215561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7cb0306a-91f3-419c-b2d8-29f47f6e39c9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.788324375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7cb0306a-91f3-419c-b2d8-29f47f6e39c9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.827120720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bfe9bdec-fd21-48b5-8b6c-aaf8ceccc7ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.827394235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bfe9bdec-fd21-48b5-8b6c-aaf8ceccc7ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.827608575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bfe9bdec-fd21-48b5-8b6c-aaf8ceccc7ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.875665617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46390016-03b0-43bb-9162-88f6658294a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.875973139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46390016-03b0-43bb-9162-88f6658294a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.876108587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46390016-03b0-43bb-9162-88f6658294a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.912475802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=772282f9-8565-44ef-8241-4652d93c01fc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.912558310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=772282f9-8565-44ef-8241-4652d93c01fc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.912698466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=772282f9-8565-44ef-8241-4652d93c01fc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.942468759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e9b7f244-02a9-43d0-854c-cd85193aa6ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.942522718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e9b7f244-02a9-43d0-854c-cd85193aa6ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.942603486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e9b7f244-02a9-43d0-854c-cd85193aa6ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.972655695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ac867ed-e957-4e9a-a95e-2aff612d33de name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.972712487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ac867ed-e957-4e9a-a95e-2aff612d33de name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 crio[2045]: time="2021-08-13 21:01:00.972796733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224,PodSandboxId:5ba609500cc7c0e196e8a53f9269e643881216309054f516ca399aea55b7b24b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:14,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628888418967297625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cfac10cafa97cf114360b0b6669f20f,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08,PodSandboxId:5964fa6dc4aa11e359e914ada869b2558a77ed8fa949557ddd315eb38420f87e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_EXITED,CreatedAt:1628888414743377804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f4a3beda8c56e9e73ea7dd272ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2e11437d,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796,PodSandboxId:982d945aaab7aa9c95c4a584b5fa02c1d47cd18be04fc9f1e3ebc910e28aea78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628888226230185005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f62af0defa89a63920e12cec2b073ed7,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ac867ed-e957-4e9a-a95e-2aff612d33de name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	8fcfe96b47620       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c   42 seconds ago      Exited              kube-controller-manager   14                  5ba609500cc7c
	cc8b9155767cc       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a   46 seconds ago      Exited              kube-apiserver            14                  5964fa6dc4aa1
	e3d39a4948ef1       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75   3 minutes ago       Running             kube-scheduler            2                   982d945aaab7a
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug13 20:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.805579] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.096135] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.047060] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.126626] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1735 comm=systemd-network
	[  +1.581242] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.283722] vboxguest: loading out-of-tree module taints kernel.
	[  +0.017665] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 20:48] systemd-fstab-generator[2140]: Ignoring "noauto" for root device
	[  +0.160299] systemd-fstab-generator[2153]: Ignoring "noauto" for root device
	[  +0.222780] systemd-fstab-generator[2179]: Ignoring "noauto" for root device
	[  +3.347224] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[Aug13 20:50] NFSD: Unable to end grace period: -110
	[Aug13 20:52] systemd-fstab-generator[6114]: Ignoring "noauto" for root device
	[Aug13 20:56] systemd-fstab-generator[7211]: Ignoring "noauto" for root device
	
	* 
	* ==> kernel <==
	*  21:01:01 up 13 min,  0 users,  load average: 0.23, 0.37, 0.28
	Linux kubernetes-upgrade-20210813204600-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [cc8b9155767cc7fbb6cd98371277d0ca5a9d377963f7404c7555a5960998bf08] <==
	* I0813 21:00:15.194455       1 server.go:553] external host was not specified, using 192.168.50.24
	I0813 21:00:15.195548       1 server.go:161] Version: v1.22.0-rc.0
	I0813 21:00:15.514071       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0813 21:00:15.517575       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0813 21:00:15.517687       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0813 21:00:15.522455       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0813 21:00:15.522540       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0813 21:00:15.527634       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:16.513461       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:16.529167       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:17.515123       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:18.058147       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:18.805773       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:20.696803       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:21.749036       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:25.183607       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:26.239163       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:31.697248       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 21:00:32.176344       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	Error: context deadline exceeded
	
	* 
	* ==> kube-controller-manager [8fcfe96b47620a85746f58ab868d4a16cdaf31c42d056d7f62494af1ba62b224] <==
	* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc000a36080, 0x203000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(...)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000359b30)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000359b30, 0x5175ae0, 0xc0008bb920, 0x4c62101, 0xc000100360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000359b30, 0x3b9aca00, 0x0, 0x1, 0xc000100360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000359b30, 0x3b9aca00, 0xc000100360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d2
	
	goroutine 147 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000359b50, 0x5175ae0, 0xc0008bb8f0, 0x4c62101, 0xc000100360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000359b50, 0xdf8475800, 0x0, 0x1, 0xc000100360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000359b50, 0xdf8475800, 0xc000100360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	* 
	* ==> kube-scheduler [e3d39a4948ef1b24cdbfd1d2d292a2528188f6ee28aad6ec26d69da08481e796] <==
	* E0813 21:00:03.573658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:10.230778       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.50.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:10.362078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.50.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:12.221356       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 21:00:26.940266       1 trace.go:205] Trace[1327767759]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:00:16.937) (total time: 10002ms):
	Trace[1327767759]: [10.002433511s] [10.002433511s] END
	E0813 21:00:26.940339       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.50.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0813 21:00:28.063285       1 trace.go:205] Trace[1454726231]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:00:18.061) (total time: 10002ms):
	Trace[1454726231]: [10.002067091s] [10.002067091s] END
	E0813 21:00:28.063437       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0813 21:00:30.115162       1 trace.go:205] Trace[810084935]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:00:20.112) (total time: 10002ms):
	Trace[810084935]: [10.002125067s] [10.002125067s] END
	E0813 21:00:30.115322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.24:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0813 21:00:33.940288       1 trace.go:205] Trace[461327717]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:00:23.938) (total time: 10001ms):
	Trace[461327717]: [10.001585201s] [10.001585201s] END
	E0813 21:00:33.940324       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0813 21:00:36.538337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.24:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:36.538715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.50.24:8443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:36.539654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.24:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:36.922245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:41.670745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:42.055379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:51.071789       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:51.098071       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	E0813 21:00:53.852423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.24:8443: connect: connection refused
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:56 UTC, end at Fri 2021-08-13 21:01:01 UTC. --
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.070234    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.170801    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.271572    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.372972    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.473747    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.578339    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.678576    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.779938    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.880710    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: I0813 21:00:59.971494    7219 kubelet_node_status.go:71] "Attempting to register node" node="kubernetes-upgrade-20210813204600-30853"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.972282    7219 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.24:8443: connect: connection refused" node="kubernetes-upgrade-20210813204600-30853"
	Aug 13 21:00:59 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:00:59.982024    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.082114    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.185159    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.286045    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.386936    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.487926    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.588210    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.688524    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.789484    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.890594    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:00 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:00.990891    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:01 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:01.091336    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:01 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:01.192349    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	Aug 13 21:01:01 kubernetes-upgrade-20210813204600-30853 kubelet[7219]: E0813 21:01:01.293097    7219 kubelet.go:2407] "Error getting node" err="node \"kubernetes-upgrade-20210813204600-30853\" not found"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:01:01.133728   10249 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813204600-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813204600-30853
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813204600-30853: (1.180094405s)
--- FAIL: TestKubernetesUpgrade (902.33s)

                                                
                                    
TestPause/serial/Pause (6.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813204600-30853 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813204600-30853 --alsologtostderr -v=5: exit status 80 (2.527965451s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813204600-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:49:13.380664    3496 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:13.384356    3496 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:13.384374    3496 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:13.384379    3496 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:13.384527    3496 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:13.384759    3496 out.go:305] Setting JSON to false
	I0813 20:49:13.384789    3496 mustload.go:65] Loading cluster: pause-20210813204600-30853
	I0813 20:49:13.385202    3496 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:13.385761    3496 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:13.385814    3496 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:13.397335    3496 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0813 20:49:13.397798    3496 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:13.398434    3496 main.go:130] libmachine: Using API Version  1
	I0813 20:49:13.398457    3496 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:13.398786    3496 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:13.398958    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:13.402347    3496 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:13.402658    3496 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:13.402702    3496 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:13.414026    3496 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0813 20:49:13.414424    3496 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:13.414826    3496 main.go:130] libmachine: Using API Version  1
	I0813 20:49:13.414857    3496 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:13.415174    3496 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:13.415352    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:13.416044    3496 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813204600-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:49:13.418522    3496 out.go:177] * Pausing node pause-20210813204600-30853 ... 
	I0813 20:49:13.418544    3496 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:13.418845    3496 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:13.418897    3496 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:13.429027    3496 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0813 20:49:13.429441    3496 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:13.429861    3496 main.go:130] libmachine: Using API Version  1
	I0813 20:49:13.429886    3496 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:13.430273    3496 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:13.430440    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:13.430627    3496 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:13.430648    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:13.436390    3496 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:13.436815    3496 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:13.436851    3496 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:13.436970    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:13.437133    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:13.437259    3496 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:13.437369    3496 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:13.557379    3496 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:13.567673    3496 pause.go:50] kubelet running: true
	I0813 20:49:13.567730    3496 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:13.846481    3496 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:49:13.846595    3496 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:49:13.986651    3496 cri.go:76] found id: "10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5"
	I0813 20:49:13.986686    3496 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:13.986694    3496 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:13.986701    3496 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:13.986707    3496 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:13.986713    3496 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:13.986719    3496 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:13.986726    3496 cri.go:76] found id: ""
	I0813 20:49:13.986781    3496 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:14.035521    3496 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","pid":4257,"status":"running","bundle":"/run/containers/storage/overlay-containers/10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5/userdata","rootfs":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","created":"2021-08-13T20:49:13.000895625Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"739bee08","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"739bee08\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.te
rminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.875097095Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provi
sioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/
etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","pid":4225,"status":"running","bundle":"/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata","rootfs":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","created":"2021-08-13T20:49:12.571539079Z","annotations":{"addonmanager.kubernetes.io/mode"
:"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:49:12.082823120Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"v
olumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podaa2d90d0_7a2e_40cf_b9ac_81fb9e2c1e76.slice","io.kubernetes.cri-o.ContainerID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.45648335Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.p
od.name\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kub
ernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provis
ioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.ha
sh":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","i
o.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a
1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f
97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o",
"io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes
.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernete
s.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/con
fig.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-
o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d
9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default
","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.
hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf24544
28a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b2380
25a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb
8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o
.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.k
ubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff819d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-
o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad436
0b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1
\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4g
rvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b24
47402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/use
rdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernete
s.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubern
etes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\"
,\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/us
erdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kub
ernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-2
0210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/control
ler-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed
'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"
ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kuber
netes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Privi
legedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36d
cf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2b
a567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-2021081320460
0-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"reado
nly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5d
d04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.k
ubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":
"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\
",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inact
ive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created":"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.
ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath
":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e99200313
30011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/user
data","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z"
,"io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b7
9a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48b
a3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:14.037585    3496 cri.go:113] list returned 15 containers
	I0813 20:49:14.037616    3496 cri.go:116] container: {ID:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 Status:running}
	I0813 20:49:14.037630    3496 cri.go:116] container: {ID:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e Status:running}
	I0813 20:49:14.037641    3496 cri.go:118] skipping 2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e - not in ps
	I0813 20:49:14.037647    3496 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:14.037653    3496 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:14.037660    3496 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:14.037665    3496 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:14.037672    3496 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:14.037679    3496 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:14.037685    3496 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:14.037700    3496 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:14.037705    3496 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:14.037712    3496 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:14.037718    3496 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:14.037724    3496 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:14.037733    3496 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:14.037740    3496 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:14.037746    3496 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:14.037752    3496 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:14.037758    3496 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:14.037766    3496 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:14.037770    3496 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:14.037776    3496 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
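The "skipping <id> - not in ps" lines above show the pause path reconciling two views of the runtime: `sudo runc list -f json` returns every OCI container on the host (including the pause/sandbox containers), while the earlier `crictl ps --quiet` label queries return only the IDs minikube should act on. A minimal sketch of that reconciliation, with hypothetical names (filterListed is illustrative, not minikube's actual cri.go implementation):

package main

import "fmt"

// container mirrors the {ID Status} pairs printed by cri.go above.
type container struct {
	ID     string
	Status string
}

// filterListed keeps only the runc-listed containers whose IDs also
// appeared in the crictl ps output; everything else is skipped, which
// matches the "skipping <id> - not in ps" log lines.
func filterListed(listed []container, foundIDs []string) []container {
	found := make(map[string]bool, len(foundIDs))
	for _, id := range foundIDs {
		found[id] = true
	}
	var keep []container
	for _, c := range listed {
		if !found[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	listed := []container{
		{ID: "10dab2af9957...", Status: "running"},
		{ID: "2a6ab48b5042...", Status: "running"}, // a sandbox; not in ps
	}
	fmt.Println(len(filterListed(listed, []string{"10dab2af9957..."})), "container(s) kept")
}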
	I0813 20:49:14.037824    3496 ssh_runner.go:149] Run: sudo runc pause 10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5
	I0813 20:49:14.060977    3496 ssh_runner.go:149] Run: sudo runc pause 10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164
	I0813 20:49:14.083438    3496 retry.go:31] will retry after 276.165072ms: runc: sudo runc pause 10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:49:14Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:49:14.360816    3496 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:14.376899    3496 pause.go:50] kubelet running: false
	I0813 20:49:14.376972    3496 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:14.596789    3496 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:49:14.596874    3496 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:49:14.748600    3496 cri.go:76] found id: "10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5"
	I0813 20:49:14.748630    3496 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:14.748635    3496 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:14.748639    3496 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:14.748643    3496 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:14.748646    3496 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:14.748650    3496 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:14.748653    3496 cri.go:76] found id: ""
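The trailing `found id: ""` entry is consistent with splitting the combined crictl output on newlines: output that ends in a newline yields one empty final element. A short sketch under that assumption (parseIDs is illustrative only, not minikube's parser):

package main

import (
	"fmt"
	"strings"
)

// parseIDs splits `crictl ps -a --quiet` output into IDs. Splitting on
// "\n" keeps a final empty string when the output ends with a newline,
// which would account for the empty `found id: ""` line above.
func parseIDs(out string) []string {
	return strings.Split(out, "\n")
}

func main() {
	out := "10dab2af9957...\nd33287457e45...\n"
	for _, id := range parseIDs(out) {
		fmt.Printf("found id: %q\n", id)
	}
}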
	I0813 20:49:14.748694    3496 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:14.801037    3496 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","pid":4257,"status":"paused","bundle":"/run/containers/storage/overlay-containers/10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5/userdata","rootfs":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","created":"2021-08-13T20:49:13.000895625Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"739bee08","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"739bee08\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.ter
minationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.875097095Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provis
ioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/e
tc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","pid":4225,"status":"running","bundle":"/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata","rootfs":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","created":"2021-08-13T20:49:12.571539079Z","annotations":{"addonmanager.kubernetes.io/mode":
"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:49:12.082823120Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"vo
lumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podaa2d90d0_7a2e_40cf_b9ac_81fb9e2c1e76.slice","io.kubernetes.cri-o.ContainerID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.45648335Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.po
d.name\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kube
rnetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisi
oner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.has
h":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io
.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1
bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f9
7-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","
io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.
cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes
.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/conf
ig.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o
.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9
/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default"
,"io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.h
ash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf245442
8a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b23802
5a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8
a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.
CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.ku
bernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff819d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o
.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360
b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\
"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4gr
vm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b244
7402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/user
data","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes
.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kuberne
tes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",
\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/use
rdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kube
rnetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20
210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controll
er-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'
","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"i
ps\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kubern
etes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Privil
egedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dc
f","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba
567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600
-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readon
ly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd
04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.ku
bernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"
/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\"
,\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inacti
ve-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created":"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.C
ontainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath"
:"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e992003133
0011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userd
ata","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z",
"io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79
a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba
3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:14.801998    3496 cri.go:113] list returned 15 containers
	I0813 20:49:14.802022    3496 cri.go:116] container: {ID:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 Status:paused}
	I0813 20:49:14.802037    3496 cri.go:122] skipping {10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 paused}: state = "paused", want "running"
	I0813 20:49:14.802052    3496 cri.go:116] container: {ID:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e Status:running}
	I0813 20:49:14.802060    3496 cri.go:118] skipping 2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e - not in ps
	I0813 20:49:14.802066    3496 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:14.802073    3496 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:14.802082    3496 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:14.802088    3496 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:14.802103    3496 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:14.802109    3496 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:14.802118    3496 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:14.802126    3496 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:14.802139    3496 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:14.802146    3496 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:14.802155    3496 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:14.802162    3496 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:14.802173    3496 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:14.802181    3496 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:14.802190    3496 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:14.802196    3496 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:14.802205    3496 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:14.802212    3496 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:14.802223    3496 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:14.802232    3496 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
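The cri.go:116/118/122 decisions above intersect the full `runc list -f json` inventory with the IDs returned by the earlier `crictl ps` pass: containers already in state "paused" are skipped by state, and sandbox (pause) containers that never appeared in the crictl listing are skipped as "not in ps". A minimal Go sketch of that selection step follows; the `container` type and `filterRunning` helper are illustrative stand-ins, not minikube's actual cri.go code.

    package main

    import "fmt"

    // container mirrors the {ID, Status} pairs printed by the cri.go:116 lines above.
    type container struct {
    	ID     string
    	Status string
    }

    // filterRunning keeps only containers that are running AND were returned by
    // the earlier crictl ps listing, matching the skip decisions logged at
    // cri.go:122 (state != "running") and cri.go:118 ("not in ps").
    func filterRunning(all []container, inPs map[string]bool) []string {
    	var keep []string
    	for _, c := range all {
    		if c.Status != "running" {
    			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, "running")
    			continue
    		}
    		if !inPs[c.ID] {
    			fmt.Printf("skipping %s - not in ps\n", c.ID)
    			continue
    		}
    		keep = append(keep, c.ID)
    	}
    	return keep
    }

    func main() {
    	// Truncated IDs here are placeholders for the full 64-character IDs above.
    	all := []container{
    		{ID: "10dab2af99...", Status: "paused"},
    		{ID: "2e50c328d7...", Status: "running"},
    	}
    	inPs := map[string]bool{"2e50c328d7...": true}
    	fmt.Println(filterRunning(all, inPs))
    }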
	I0813 20:49:14.802279    3496 ssh_runner.go:149] Run: sudo runc pause 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164
	I0813 20:49:14.835562    3496 ssh_runner.go:149] Run: sudo runc pause 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf
	I0813 20:49:14.855253    3496 retry.go:31] will retry after 540.190908ms: runc: sudo runc pause 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:49:14Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:49:15.395575    3496 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:15.409399    3496 pause.go:50] kubelet running: false
	I0813 20:49:15.409488    3496 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:15.616353    3496 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:49:15.616451    3496 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:49:15.739822    3496 cri.go:76] found id: "10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5"
	I0813 20:49:15.739858    3496 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:15.739864    3496 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:15.739868    3496 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:15.739872    3496 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:15.739875    3496 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:15.739879    3496 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:15.739885    3496 cri.go:76] found id: ""
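Note the empty `found id: ""` entry closing the list: the four chained `crictl ps -a --quiet` commands emit one ID per line with a trailing newline, so a naive split on newlines likely produces an empty final element. A small illustrative Go helper that tolerates this follows (hypothetical, not minikube's actual parser):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitIDs turns the raw stdout of the chained crictl calls into one ID per
    // element, dropping the empty entry that the trailing newline would
    // otherwise leave behind (the likely source of the found id: "" line above).
    func splitIDs(raw string) []string {
    	var ids []string
    	for _, line := range strings.Split(raw, "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids
    }

    func main() {
    	raw := "10dab2af99\nd33287457e\n"
    	fmt.Println(splitIDs(raw)) // [10dab2af99 d33287457e]
    }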
	I0813 20:49:15.739939    3496 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:15.787488    3496 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","pid":4257,"status":"paused","bundle":"/run/containers/storage/overlay-containers/10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5/userdata","rootfs":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","created":"2021-08-13T20:49:13.000895625Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"739bee08","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"739bee08\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.ter
minationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.875097095Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provis
ioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/e
tc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","pid":4225,"status":"running","bundle":"/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata","rootfs":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","created":"2021-08-13T20:49:12.571539079Z","annotations":{"addonmanager.kubernetes.io/mode":
"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:49:12.082823120Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"vo
lumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podaa2d90d0_7a2e_40cf_b9ac_81fb9e2c1e76.slice","io.kubernetes.cri-o.ContainerID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.45648335Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.po
d.name\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kube
rnetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisi
oner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"paused","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash
":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.
kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1b
ad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97
-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","i
o.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.c
ri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.
cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/confi
g.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.
ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/
564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default",
"io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.ha
sh":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428
a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025
a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a
8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.C
groupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kub
ernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff819d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.
ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b
2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"
}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grv
m\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447
402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/userd
ata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes.
cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernet
es.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\
"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/user
data","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kuber
netes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-202
10813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controlle
r-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'"
,"org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"ip
s\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kuberne
tes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Privile
gedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf
","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba5
67752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600-
30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonl
y\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd0
4ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kub
ernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"/
var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",
\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created":"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.Co
ntainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":
"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e9920031330
011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userda
ta","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z","
io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79a
1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3
029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
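The block above is `runc list -f json`-style output captured by minikube's cri.go: a JSON array of container records whose `id`, `status`, and CRI-O annotations drive the pause logic that follows. A minimal sketch of decoding that shape in Go (the struct and sample are illustrative, trimmed to the fields visible above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // One entry of the JSON array above; only the fields the pause
    // logic needs are modeled here.
    type runcContainer struct {
        ID          string            `json:"id"`
        Status      string            `json:"status"` // "running", "paused", "stopped"
        Bundle      string            `json:"bundle"`
        Annotations map[string]string `json:"annotations"`
    }

    func main() {
        // Truncated sample in the same shape as the captured output.
        out := []byte(`[{"id":"e9920031...","status":"running","bundle":"/run/..."}]`)
        var cs []runcContainer
        if err := json.Unmarshal(out, &cs); err != nil {
            panic(err)
        }
        for _, c := range cs {
            fmt.Println(c.ID, c.Status)
        }
    }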
	I0813 20:49:15.788480    3496 cri.go:113] list returned 15 containers
	I0813 20:49:15.788500    3496 cri.go:116] container: {ID:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 Status:paused}
	I0813 20:49:15.788519    3496 cri.go:122] skipping {10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 paused}: state = "paused", want "running"
	I0813 20:49:15.788537    3496 cri.go:116] container: {ID:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e Status:running}
	I0813 20:49:15.788544    3496 cri.go:118] skipping 2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e - not in ps
	I0813 20:49:15.788552    3496 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:paused}
	I0813 20:49:15.788558    3496 cri.go:122] skipping {2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 paused}: state = "paused", want "running"
	I0813 20:49:15.788568    3496 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:15.788575    3496 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:15.788584    3496 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:15.788592    3496 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:15.788601    3496 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:15.788611    3496 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:15.788617    3496 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:15.788623    3496 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:15.788631    3496 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:15.788637    3496 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:15.788644    3496 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:15.788652    3496 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:15.788659    3496 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:15.788666    3496 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:15.788673    3496 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:15.788683    3496 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:15.788690    3496 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:15.788697    3496 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:15.788704    3496 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
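The cri.go lines above apply a two-stage filter before anything is paused: entries whose state is not "running" are skipped, and running entries whose IDs did not appear in the preceding `crictl ps` listing (the pod sandbox containers) are skipped as "not in ps". A compact sketch of that selection, with illustrative names rather than minikube's actual code:

    package main

    import "fmt"

    type ctr struct{ ID, Status string }

    // filterRunning keeps containers that are running AND were listed by
    // `crictl ps`; everything else is skipped, mirroring the trace above.
    func filterRunning(all []ctr, inPs map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if c.Status != "running" || !inPs[c.ID] {
                continue
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        all := []ctr{{"aaa", "paused"}, {"bbb", "running"}, {"ccc", "running"}}
        fmt.Println(filterRunning(all, map[string]bool{"ccc": true})) // [ccc]
    }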
	I0813 20:49:15.788752    3496 ssh_runner.go:149] Run: sudo runc pause 66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf
	I0813 20:49:15.810882    3496 ssh_runner.go:149] Run: sudo runc pause 66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf 82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b
	I0813 20:49:15.835084    3496 out.go:177] 
	W0813 20:49:15.835213    3496 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf 82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:49:15Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:49:15.835224    3496 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:49:15.847243    3496 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:49:15.848915    3496 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813204600-30853 --alsologtostderr -v=5" : exit status 80
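The exit status 80 has a clear trigger in the trace: after one well-formed `sudo runc pause <id>` invocation, the next invocation appends a second container ID to the same command line, and `runc pause` takes exactly one argument ("pause" requires exactly 1 argument(s)). A minimal sketch of the safe pattern, one runc invocation per container; the helper name is hypothetical and this is not minikube's actual fix:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pauseAll pauses containers one at a time, since `runc pause`
    // rejects more than one container ID per invocation.
    func pauseAll(ids []string) error {
        for _, id := range ids {
            if out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput(); err != nil {
                return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
            }
        }
        return nil
    }

    func main() {
        ids := []string{
            "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf",
            "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b",
        }
        if err := pauseAll(ids); err != nil {
            fmt.Println(err)
        }
    }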
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (270.035821ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25: (1.186585608s)
helpers_test.go:253: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                      | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:35:45 UTC | Fri, 13 Aug 2021 20:38:16 UTC |
	|         | multinode-20210813202419-30853          |                                         |         |         |                               |                               |
	|         | --wait=true -v=8                        |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| start   | -p                                      | multinode-20210813202419-30853-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:17 UTC | Fri, 13 Aug 2021 20:39:13 UTC |
	|         | multinode-20210813202419-30853-m03      |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | multinode-20210813202419-30853-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:13 UTC | Fri, 13 Aug 2021 20:39:14 UTC |
	|         | multinode-20210813202419-30853-m03      |                                         |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853          | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:14 UTC | Fri, 13 Aug 2021 20:39:16 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:16 UTC | Fri, 13 Aug 2021 20:39:18 UTC |
	|         | multinode-20210813202419-30853          |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:02 UTC | Fri, 13 Aug 2021 20:43:38 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false             |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:38 UTC | Fri, 13 Aug 2021 20:43:41 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl pull busybox             |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:41 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2          |                                         |         |         |                               |                               |
	|         |  --container-runtime=crio               |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl image ls                 |                                         |         |         |                               |                               |
	| -p      | test-preload-20210813204102-30853       | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:25 UTC | Fri, 13 Aug 2021 20:44:26 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	| start   | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:26 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2             |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:21 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --cancel-scheduled                      |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:34 UTC | Fri, 13 Aug 2021 20:45:42 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --schedule 5s                           |                                         |         |         |                               |                               |
	| delete  | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:59 UTC | Fri, 13 Aug 2021 20:46:00 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:02 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:02 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubenet-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:03 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | kubenet-20210813204703-30853            |                                         |         |         |                               |                               |
	| delete  | -p false-20210813204703-30853           | false-20210813204703-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:04 UTC | Fri, 13 Aug 2021 20:47:04 UTC |
	| start   | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:42 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2    |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:42 UTC | Fri, 13 Aug 2021 20:47:44 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	| start   | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:48:55 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --memory=2048                      |                                         |         |         |                               |                               |
	|         | --wait=true --driver=kvm2               |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:55 UTC | Fri, 13 Aug 2021 20:48:57 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --install-addons=false                  |                                         |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:06 UTC | Fri, 13 Aug 2021 20:49:13 UTC |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:49:06
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:49:06.750460    3412 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:06.750532    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750535    3412 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:06.750538    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750645    3412 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:06.750968    3412 out.go:305] Setting JSON to false
	I0813 20:49:06.794979    3412 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9109,"bootTime":1628878638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:49:06.795299    3412 start.go:121] virtualization: kvm guest
	I0813 20:49:06.798215    3412 out.go:177] * [pause-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:49:06.799922    3412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:06.798386    3412 notify.go:169] Checking for updates...
	I0813 20:49:06.801691    3412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:49:06.803336    3412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:49:06.804849    3412 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:49:06.805220    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:06.805637    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.805697    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.817202    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0813 20:49:06.817597    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.818173    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.818195    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.818649    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.818887    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.819077    3412 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:49:06.819425    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.819465    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.830844    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0813 20:49:06.831324    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.831848    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.831871    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.832233    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.832415    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.865593    3412 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:49:06.865627    3412 start.go:278] selected driver: kvm2
	I0813 20:49:06.865641    3412 start.go:751] validating driver "kvm2" against &{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.865757    3412 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:49:06.866497    3412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.866703    3412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:49:06.878129    3412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:49:06.878764    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:06.878779    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:06.878789    3412 start_flags.go:277] config:
	{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.878936    3412 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.881128    3412 out.go:177] * Starting control plane node pause-20210813204600-30853 in cluster pause-20210813204600-30853
	I0813 20:49:06.881153    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:06.881197    3412 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:49:06.881216    3412 cache.go:56] Caching tarball of preloaded images
	I0813 20:49:06.881339    3412 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:49:06.881361    3412 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:49:06.881476    3412 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/config.json ...
	I0813 20:49:06.881656    3412 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:49:06.881687    3412 start.go:313] acquiring machines lock for pause-20210813204600-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:49:06.881775    3412 start.go:317] acquired machines lock for "pause-20210813204600-30853" in 71.324µs
	I0813 20:49:06.881794    3412 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:49:06.881801    3412 fix.go:55] fixHost starting: 
	I0813 20:49:06.882135    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.882177    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.894411    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0813 20:49:06.894958    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.895630    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.895652    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.896024    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.896206    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.896395    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:06.899827    3412 fix.go:108] recreateIfNeeded on pause-20210813204600-30853: state=Running err=<nil>
	W0813 20:49:06.899844    3412 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:49:05.079802    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:06.902070    3412 out.go:177] * Updating the running kvm2 "pause-20210813204600-30853" VM ...
	I0813 20:49:06.902100    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902283    3412 machine.go:88] provisioning docker machine ...
	I0813 20:49:06.902305    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902430    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902571    3412 buildroot.go:166] provisioning hostname "pause-20210813204600-30853"
	I0813 20:49:06.902599    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902737    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:06.908023    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908395    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:06.908431    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908509    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:06.908703    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908861    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908990    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:06.909175    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:06.909381    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:06.909399    3412 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813204600-30853 && echo "pause-20210813204600-30853" | sudo tee /etc/hostname
	I0813 20:49:07.062168    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813204600-30853
	
	I0813 20:49:07.062210    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.068189    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068544    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.068577    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068759    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.068953    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069117    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069259    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.069439    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.069612    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.069649    3412 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813204600-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813204600-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813204600-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:49:07.221530    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:49:07.221612    3412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/
docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:49:07.221648    3412 buildroot.go:174] setting up certificates
	I0813 20:49:07.221660    3412 provision.go:83] configureAuth start
	I0813 20:49:07.221672    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:07.221918    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:07.227471    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.227839    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.227868    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.228085    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.232869    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233213    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.233251    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233347    3412 provision.go:138] copyHostCerts
	I0813 20:49:07.233436    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:49:07.233450    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:49:07.233511    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:49:07.233650    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:49:07.233667    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:49:07.233695    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:49:07.233774    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:49:07.233784    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:49:07.233812    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:49:07.233859    3412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813204600-30853 san=[192.168.39.61 192.168.39.61 localhost 127.0.0.1 minikube pause-20210813204600-30853]
	I0813 20:49:07.320299    3412 provision.go:172] copyRemoteCerts
	I0813 20:49:07.320390    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:49:07.320428    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.325783    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326112    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.326152    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326310    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.326478    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.326610    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.326733    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:07.427180    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:49:07.450672    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:49:07.471272    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:49:07.489660    3412 provision.go:86] duration metric: configureAuth took 267.984336ms
	I0813 20:49:07.489686    3412 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:49:07.489862    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:07.489982    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.495300    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495618    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.495653    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495797    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.495985    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496150    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496279    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.496434    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.496609    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.496631    3412 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:49:08.602797    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:49:08.602830    3412 machine.go:91] provisioned docker machine in 1.700528876s
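The literal `%!s(MISSING)` token in the `printf` command above (and the `date +%!s(MISSING).%!N(MISSING)` command further down) never reached the guest in that form: the real command contains shell/date verbs such as `%s`, and re-logging the string through Go's fmt flags format verbs that have no matching argument. A one-line reproduction, with an illustrative variable name:

    package main

    import "fmt"

    func main() {
        cmd := "date +%s.%N" // what is really sent over SSH
        // Passing it as a format string mangles the verbs in the log:
        fmt.Printf(cmd + "\n") // prints: date +%!s(MISSING).%!N(MISSING)
    }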
	I0813 20:49:08.602841    3412 start.go:267] post-start starting for "pause-20210813204600-30853" (driver="kvm2")
	I0813 20:49:08.602846    3412 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:49:08.602880    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.603196    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:49:08.603247    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.608420    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608704    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.608735    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608875    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.609064    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.609198    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.609343    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.709733    3412 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:49:08.715709    3412 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:49:08.715731    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:49:08.715792    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:49:08.715871    3412 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:49:08.715956    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:49:08.724293    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:08.750217    3412 start.go:270] post-start completed in 147.362269ms
	I0813 20:49:08.750260    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.750492    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.756215    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756621    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.756650    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756812    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.757034    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757170    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757300    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.757480    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:08.757670    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:08.757683    3412 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 20:49:08.900897    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887748.901369788
	
	I0813 20:49:08.900932    3412 fix.go:212] guest clock: 1628887748.901369788
	I0813 20:49:08.900944    3412 fix.go:225] Guest: 2021-08-13 20:49:08.901369788 +0000 UTC Remote: 2021-08-13 20:49:08.750472863 +0000 UTC m=+2.052052145 (delta=150.896925ms)
	I0813 20:49:08.900988    3412 fix.go:196] guest clock delta is within tolerance: 150.896925ms
	I0813 20:49:08.900996    3412 fix.go:57] fixHost completed within 2.019194265s
	I0813 20:49:08.901002    3412 start.go:80] releasing machines lock for "pause-20210813204600-30853", held for 2.019216553s
	I0813 20:49:08.901046    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.901309    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:08.906817    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907191    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.907257    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907379    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.907574    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908140    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908391    3412 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:08.908418    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.908488    3412 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:49:08.908539    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.915229    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915547    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.915580    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915727    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.915920    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.916011    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916080    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.916237    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.916429    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.916461    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916636    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.916784    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.917107    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.917257    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:09.014176    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:09.014353    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:09.061257    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:09.061287    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:09.061352    3412 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:49:09.075880    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:49:09.086949    3412 docker.go:153] disabling docker service ...
	I0813 20:49:09.087012    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:49:09.103245    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:49:09.117178    3412 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:49:09.373507    3412 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:49:09.585738    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:49:09.599794    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:49:09.615240    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:49:09.623727    3412 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:49:09.630919    3412 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:49:09.637747    3412 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:49:09.808564    3412 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:49:09.952030    3412 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:49:09.952144    3412 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:49:09.959400    3412 start.go:413] Will wait 60s for crictl version
	I0813 20:49:09.959452    3412 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:49:09.991124    3412 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:49:09.991251    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.280528    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.528655    3412 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:49:10.528694    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:10.534359    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.534782    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:10.534815    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.535076    3412 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:49:10.539953    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:10.540017    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.583397    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.583419    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:10.583459    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.620617    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.620642    3412 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:49:10.620703    3412 ssh_runner.go:149] Run: crio config
	I0813 20:49:10.896405    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:10.896427    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:10.896436    3412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:49:10.896448    3412 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813204600-30853 NodeName:pause-20210813204600-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.61 CgroupDriver:systemd ClientCAFile:/var/lib/m
inikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:49:10.896629    3412 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210813204600-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:49:10.896754    3412 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=pause-20210813204600-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.61 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:49:10.896819    3412 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:49:10.911638    3412 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:49:10.911723    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:49:10.920269    3412 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0813 20:49:10.933623    3412 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:49:10.945877    3412 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0813 20:49:10.958716    3412 ssh_runner.go:149] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0813 20:49:10.962845    3412 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853 for IP: 192.168.39.61
	I0813 20:49:10.962912    3412 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:49:10.962936    3412 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:49:10.963041    3412 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.key
	I0813 20:49:10.963067    3412 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key.e9ce627b
	I0813 20:49:10.963088    3412 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key
	I0813 20:49:10.963223    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:49:10.963274    3412 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:49:10.963290    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:49:10.963332    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:49:10.963362    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:49:10.963395    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:49:10.963481    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:10.964763    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:49:10.996208    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:49:11.015193    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:49:11.032382    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:49:11.050461    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:49:11.067415    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:49:11.085267    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:49:11.102588    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:49:11.128113    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:49:11.146008    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:49:11.162723    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:49:11.181637    3412 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:49:11.193799    3412 ssh_runner.go:149] Run: openssl version
	I0813 20:49:11.199783    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:49:11.209928    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214459    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214508    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.221207    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:49:11.229476    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:49:11.237550    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245454    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245501    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.251754    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:49:11.258461    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:49:11.267146    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271736    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271779    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.278000    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:49:11.284415    3412 kubeadm.go:390] StartCluster: {Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 Clu
sterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:11.284518    3412 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:49:11.284561    3412 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:49:11.324305    3412 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:11.324324    3412 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:11.324329    3412 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:11.324336    3412 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:11.324339    3412 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:11.324343    3412 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:11.324347    3412 cri.go:76] found id: ""
	I0813 20:49:11.324383    3412 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:11.370394    3412 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.prop
erty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.Contai
nerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30
853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c3
9b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe9
37ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","i
o.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes
.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af7
8ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210
813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff81
9d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d8
2b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/co
nfig.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.H
ostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.M
ountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"k
ube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/userdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"Fi
le","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.ui
d\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kube
rnetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.
61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/userdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePol
icy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kub
e-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","
io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/v
olume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:
48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-41
7f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-
o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968
848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container
.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.p
od.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProf
ilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d2
2eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.
kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/
storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\
":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created"
:"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/sto
rage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernet
es.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d55244
9a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",
\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]",
"io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode"
:"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:11.370977    3412 cri.go:113] list returned 13 containers
	I0813 20:49:11.370992    3412 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:11.371004    3412 cri.go:122] skipping {2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 running}: state = "running", want "paused"
	I0813 20:49:11.371014    3412 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:11.371019    3412 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:11.371023    3412 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:11.371028    3412 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:11.371034    3412 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:11.371040    3412 cri.go:122] skipping {66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf running}: state = "running", want "paused"
	I0813 20:49:11.371048    3412 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:11.371054    3412 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:11.371063    3412 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:11.371069    3412 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:11.371076    3412 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:11.371081    3412 cri.go:122] skipping {82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b running}: state = "running", want "paused"
	I0813 20:49:11.371087    3412 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:11.371091    3412 cri.go:122] skipping {83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 running}: state = "running", want "paused"
	I0813 20:49:11.371099    3412 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:11.371105    3412 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:11.371110    3412 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:11.371115    3412 cri.go:122] skipping {ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf running}: state = "running", want "paused"
	I0813 20:49:11.371119    3412 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:11.371127    3412 cri.go:122] skipping {d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 running}: state = "running", want "paused"
	I0813 20:49:11.371135    3412 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:11.371144    3412 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:11.371154    3412 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:11.371164    3412 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
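
The cri.go lines above show the filter applied to that list: a container is kept only when it appears in the crictl ps output and its state matches the wanted one (here "paused"); everything else is skipped with one of the two messages seen. A sketch of that selection under those assumptions (the container type, the inPs set, and the message wording are illustrative, not minikube's real identifiers):

package main

import "fmt"

// container mirrors the {ID Status} pairs in the log above.
type container struct {
	ID     string
	Status string
}

func filterByState(all []container, want string, inPs map[string]bool) []container {
	var kept []container
	for _, c := range all {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []container{
		{ID: "2e50c328d710", Status: "running"}, // in ps, wrong state
		{ID: "55ddf08f50f8", Status: "running"}, // not in ps
	}
	// With every container still running, nothing survives the filter,
	// which is why the pause path above ends with no containers selected.
	fmt.Println(filterByState(all, "paused", map[string]bool{"2e50c328d710": true}))
}
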
	I0813 20:49:11.371203    3412 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:49:11.379585    3412 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:49:11.379610    3412 kubeadm.go:600] restartCluster start
	I0813 20:49:11.379656    3412 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:49:11.387273    3412 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:49:11.388131    3412 kubeconfig.go:93] found "pause-20210813204600-30853" server: "https://192.168.39.61:8443"
	I0813 20:49:11.389906    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.391540    3412 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:49:11.398645    3412 api_server.go:164] Checking apiserver status ...
	I0813 20:49:11.398727    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:11.410339    3412 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2593/cgroup
	I0813 20:49:11.416825    3412 api_server.go:180] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope"
	I0813 20:49:11.416874    3412 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope/freezer.state
	I0813 20:49:11.424153    3412 api_server.go:202] freezer state: "THAWED"
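
The pgrep/egrep/cat sequence above is how the apiserver is confirmed unfrozen: resolve the process's freezer cgroup from /proc/<pid>/cgroup, then read freezer.state under the cgroup v1 mount and expect "THAWED". A hedged sketch of the same two steps, assuming the cgroup v1 layout this VM uses:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState mirrors the two commands in the log: pick the freezer
// line out of /proc/<pid>/cgroup, then read freezer.state. cgroup v1
// is assumed, as on this VM.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3) // hierarchy-ID:controllers:path
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer controller for pid %d", pid)
}

func main() {
	if s, err := freezerState(2593); err == nil {
		fmt.Println("freezer state:", s) // "THAWED" means not paused
	}
}
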
	I0813 20:49:11.424172    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:11.430386    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
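
The healthz probe above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch; TLS verification is skipped here purely for illustration, whereas the real client trusts the cluster CA and client certificate from the kubeconfig shown earlier:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is an illustration-only shortcut.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.61:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // the "stopped: ..." lines come from this case
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
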
	I0813 20:49:11.447400    3412 system_pods.go:86] 6 kube-system pods found
	I0813 20:49:11.447439    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:11.447446    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:11.447453    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:11.447457    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:11.447460    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:11.447465    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:11.448566    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:11.448586    3412 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.39.61
	I0813 20:49:11.448597    3412 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:49:11.448603    3412 kubeadm.go:604] restartCluster took 68.987456ms
	I0813 20:49:11.448610    3412 kubeadm.go:392] StartCluster complete in 164.201481ms
	I0813 20:49:11.448627    3412 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.448743    3412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:11.449587    3412 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
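
The WriteFile line above takes a named lock (500ms retry delay, 1m0s timeout per the log) before updating the shared kubeconfig, so concurrent minikube processes cannot interleave writes. A rough local-only sketch of the same idea using an advisory flock; minikube's own lock implementation differs in detail, and the /tmp path is a placeholder:

package main

import (
	"os"
	"syscall"
)

// writeLocked holds an exclusive advisory lock while rewriting the
// file; a stand-in for the guarded kubeconfig update logged above.
func writeLocked(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	if err := f.Truncate(0); err != nil {
		return err
	}
	_, err = f.WriteAt(data, 0)
	return err
}

func main() {
	_ = writeLocked("/tmp/kubeconfig-sketch", []byte("apiVersion: v1\n"))
}
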
	I0813 20:49:11.450509    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.454641    3412 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813204600-30853" rescaled to 1
	I0813 20:49:11.454698    3412 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:49:11.454707    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:49:11.456952    3412 out.go:177] * Verifying Kubernetes components...
	I0813 20:49:11.457008    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:11.454754    3412 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:49:11.457069    3412 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.455000    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:11.457090    3412 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813204600-30853"
	W0813 20:49:11.457098    3412 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:49:11.457112    3412 addons.go:59] Setting default-storageclass=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.457130    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.457136    3412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813204600-30853"
	I0813 20:49:11.457449    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457490    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.457642    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457688    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.468728    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0813 20:49:11.469146    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.469685    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.469705    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.470063    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.470584    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.470626    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.476732    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0813 20:49:11.477171    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.477677    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.477701    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.478079    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.478277    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.482479    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.483740    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0813 20:49:11.484114    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.484536    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.484555    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.484941    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.485097    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.487884    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.490267    3412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:49:11.488882    3412 addons.go:135] Setting addon default-storageclass=true in "pause-20210813204600-30853"
	W0813 20:49:11.490289    3412 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:49:11.490323    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.490374    3412 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.490389    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:49:11.490406    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.490689    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.490728    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.496655    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497065    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.497093    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497244    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.497423    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.497618    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.497767    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.503422    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0813 20:49:11.503821    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.504277    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.504306    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.504582    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.505173    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.505219    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.518799    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0813 20:49:11.519214    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.519629    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.519655    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.519995    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.520180    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.523435    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.523650    3412 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:11.523666    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:49:11.523682    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.529028    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529396    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.529423    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529571    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.529736    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.529865    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.530004    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.605965    3412 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:49:11.606090    3412 node_ready.go:35] waiting up to 6m0s for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610421    3412 node_ready.go:49] node "pause-20210813204600-30853" has status "Ready":"True"
	I0813 20:49:11.610442    3412 node_ready.go:38] duration metric: took 4.320432ms waiting for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610453    3412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:11.616546    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.616740    3412 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631733    3412 pod_ready.go:92] pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.631757    3412 pod_ready.go:81] duration metric: took 14.992576ms waiting for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631771    3412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639091    3412 pod_ready.go:92] pod "etcd-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.639117    3412 pod_ready.go:81] duration metric: took 7.33748ms waiting for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639129    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645487    3412 pod_ready.go:92] pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.645508    3412 pod_ready.go:81] duration metric: took 6.370538ms waiting for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645519    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652583    3412 pod_ready.go:92] pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.652602    3412 pod_ready.go:81] duration metric: took 7.073719ms waiting for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652614    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.658710    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:12.038755    3412 pod_ready.go:92] pod "kube-proxy-4n8kb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.038776    3412 pod_ready.go:81] duration metric: took 386.155583ms waiting for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.038787    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.069005    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069032    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069056    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069036    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069332    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069333    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069336    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069348    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069357    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069364    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069368    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069371    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069377    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069380    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069631    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069649    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069664    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069635    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069693    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069706    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069717    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069914    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069931    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.071889    3412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:49:12.071910    3412 addons.go:344] enableAddons completed in 617.161828ms
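
Each addon above is enabled by streaming its manifest into /etc/kubernetes/addons ("scp memory -->") and applying it with the cluster's bundled kubectl over SSH. A hedged sketch of the equivalent steps run locally; the manifest content is a placeholder and the paths mirror the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Placeholder manifest; the real storage-provisioner YAML (2676
	// bytes per the log) is copied from memory over SSH.
	manifest := []byte("# storage-provisioner manifest goes here\n")
	dst := "/etc/kubernetes/addons/storage-provisioner.yaml"
	if err := os.WriteFile(dst, manifest, 0o644); err != nil {
		fmt.Println(err)
		return
	}
	// Apply with the cluster's own kubectl, as the ssh_runner lines do.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.21.3/kubectl", "apply", "-f", dst)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
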
	I0813 20:49:12.434704    3412 pod_ready.go:92] pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.434726    3412 pod_ready.go:81] duration metric: took 395.931948ms waiting for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.434734    3412 pod_ready.go:38] duration metric: took 824.269103ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
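
The node_ready/pod_ready waits above all follow the same poll-until-ready pattern: re-check a condition on a short interval until it holds or the deadline (here 6m0s) passes, then report the duration metric. A generic sketch of that loop; the interval and the stand-in condition are assumptions:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor re-runs ready() every interval until it reports true, an
// error, or the timeout elapses; the shape of the waits logged above.
func waitFor(timeout, interval time.Duration, ready func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Stand-in condition; a real check would query the pod's Ready status.
	err := waitFor(6*time.Minute, 200*time.Millisecond, func() (bool, error) {
		return true, nil
	})
	fmt.Println(err, "- duration metric: took", time.Since(start))
}
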
	I0813 20:49:12.434752    3412 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:12.434790    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:12.451457    3412 api_server.go:70] duration metric: took 996.725767ms to wait for apiserver process to appear ...
	I0813 20:49:12.451487    3412 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:12.451500    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:12.457776    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0813 20:49:12.458697    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:12.458716    3412 api_server.go:129] duration metric: took 7.221294ms to wait for apiserver health ...
	I0813 20:49:12.458726    3412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:12.637203    3412 system_pods.go:59] 7 kube-system pods found
	I0813 20:49:12.637240    3412 system_pods.go:61] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:12.637248    3412 system_pods.go:61] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:12.637254    3412 system_pods.go:61] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:12.637261    3412 system_pods.go:61] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:12.637266    3412 system_pods.go:61] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:12.637272    3412 system_pods.go:61] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:12.637281    3412 system_pods.go:61] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:12.637290    3412 system_pods.go:74] duration metric: took 178.557519ms to wait for pod list to return data ...
	I0813 20:49:12.637299    3412 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:12.841324    3412 default_sa.go:45] found service account: "default"
	I0813 20:49:12.841350    3412 default_sa.go:55] duration metric: took 204.040505ms for default service account to be created ...
	I0813 20:49:12.841359    3412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:13.042158    3412 system_pods.go:86] 7 kube-system pods found
	I0813 20:49:13.042205    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:13.042216    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:13.042224    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:13.042237    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:13.042245    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:13.042257    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:13.042278    3412 system_pods.go:89] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:13.042295    3412 system_pods.go:126] duration metric: took 200.930278ms to wait for k8s-apps to be running ...
	I0813 20:49:13.042313    3412 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:13.042369    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:13.056816    3412 system_svc.go:56] duration metric: took 14.491659ms WaitForService to wait for kubelet.
	I0813 20:49:13.056852    3412 kubeadm.go:547] duration metric: took 1.60212918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:49:13.056882    3412 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:13.236184    3412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:49:13.236241    3412 node_conditions.go:123] node cpu capacity is 2
	I0813 20:49:13.236260    3412 node_conditions.go:105] duration metric: took 179.373183ms to run NodePressure ...
	I0813 20:49:13.236273    3412 start.go:231] waiting for startup goroutines ...
	I0813 20:49:13.296415    3412 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:49:13.298518    3412 out.go:177] * Done! kubectl is now configured to use "pause-20210813204600-30853" cluster and "default" namespace by default
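	
	The block above (PID 3412) is minikube's post-start readiness sweep: list the kube-system pods, confirm the "default" service account exists, probe the kubelet unit with systemctl, and read node capacity. Below is a minimal Go sketch of the same checks, assuming a reachable kubeconfig (the path is hypothetical) and the k8s.io/client-go library; it is an illustration of what the log records, not minikube's own system_pods.go/default_sa.go/system_svc.go code.
	
	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Hypothetical kubeconfig path; minikube resolves this per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Mirror system_pods.go: wait for kube-system pods to appear.
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		}
	
		// Mirror default_sa.go: confirm the default service account was created.
		if _, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{}); err == nil {
			fmt.Println(`found service account: "default"`)
		}
	
		// Mirror system_svc.go: the same systemctl probe the log shows
		// (minikube runs it over SSH inside the guest).
		if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil {
			fmt.Println("kubelet service is active")
		}
	}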
	I0813 20:49:10.080830    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:10.579566    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.540519    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40792->192.168.50.24:8443: read: connection reset by peer
	I0813 20:49:14.579739    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.580451    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
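	
	The interleaved lines from PID 2943 are a second minikube process polling the apiserver's /healthz endpoint while the control plane restarts, so successive GETs fail with a client timeout, a connection reset, and a connection refused in turn. A minimal sketch of such a poll loop using only the standard library; the address comes from the log, while the timeout value and the skipped certificate verification are assumptions for illustration.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 4 * time.Second, // bounded, matching the Client.Timeout errors above
			Transport: &http.Transport{
				// Assumption for illustration: skip apiserver cert verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.50.24:8443/healthz")
			if err != nil {
				// e.g. connection refused / reset while the apiserver restarts
				fmt.Println("stopped:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz ok")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}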
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:16 UTC. --
	Aug 13 20:49:15 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:15.738538028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=7b05a5b8-8bca-4657-a0c0-6a0fe1359c45 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:16 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:16.397465708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4918e254-9a17-4efa-af14-b2fcce96fefe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:16 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:16.397525861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4918e254-9a17-4efa-af14-b2fcce96fefe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:16 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:16.398030993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4918e254-9a17-4efa-af14-b2fcce96fefe name=/runtime.v1alpha2.RuntimeService/ListContainers
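	
	Each journal triplet above is one unfiltered ListContainers round-trip against CRI-O's /runtime.v1alpha2.RuntimeService, the same listing that `sudo crictl ps -a` performs. Below is a minimal gRPC sketch of that call, assuming the k8s.io/cri-api v1alpha2 bindings and CRI-O's default socket path; it is an illustration, not kubelet or crictl code.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// CRI-O's default CRI socket; requires root to open.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter returns the full container list, as in the log
		// ("No filters were applied, returning full container list").
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Container IDs are 64 hex characters; print the short form.
			fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}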
	Aug 13 20:49:16 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:16.697052062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5cf13dc9-5a83-4d7c-b8a4-99d8a577cf88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:16 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:16.697136603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5cf13dc9-5a83-4d7c-b8a4-99d8a577cf88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:16 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:16.697481341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5cf13dc9-5a83-4d7c-b8a4-99d8a577cf88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	10dab2af99578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       0                   2a6ab48b5042a
	d33287457e451       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   50 seconds ago       Running             coredns                   0                   8088cc5d3d38a
	2e50c328d7104       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   51 seconds ago       Running             kube-proxy                0                   564d5f18f75ed
	ac4bf726a8a57       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   e992003133001
	66655950d3afa       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   55ddf08f50f8c
	83df9633ff352       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   6c56d5bf50b7a
	82d4de99d88e5       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   f228ab759c26a
	
	* 
	* ==> coredns [d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	I0813 20:48:56.155624       1 trace.go:205] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[1427131847]: [30.002619331s] [30.002619331s] END
	E0813 20:48:56.155739       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155858       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.154) (total time: 30001ms):
	Trace[911902081]: [30.001733139s] [30.001733139s] END
	E0813 20:48:56.155865       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155918       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[2019727887]: [30.002706635s] [30.002706635s] END
	E0813 20:48:56.156104       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813204600-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813204600-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813204600-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_48_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:48:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813204600-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-20210813204600-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 07e647a52575478182b10082d1b9460a
	  System UUID:                07e647a5-2575-4781-82b1-0082d1b9460a
	  Boot ID:                    1c1f8243-ce7f-455c-a669-de6493424040
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-4grvm                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     53s
	  kube-system                 etcd-pause-20210813204600-30853                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         59s
	  kube-system                 kube-apiserver-pause-20210813204600-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-20210813204600-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-4n8kb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-pause-20210813204600-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  82s (x6 over 82s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x5 over 82s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x5 over 82s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                58s                kubelet     Node pause-20210813204600-30853 status is now: NodeReady
	  Normal  Starting                 50s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	*               If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.165176] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.050992] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.137498] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1726 comm=systemd-network
	[  +1.376463] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007022] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.624786] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +20.400328] systemd-fstab-generator[2162]: Ignoring "noauto" for root device
	[  +0.134832] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +0.282454] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +6.552961] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[Aug13 20:48] systemd-fstab-generator[2800]: Ignoring "noauto" for root device
	[ +13.894926] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.479825] kauditd_printk_skb: 80 callbacks suppressed
	[Aug13 20:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.187207] systemd-fstab-generator[4013]: Ignoring "noauto" for root device
	[  +0.260965] systemd-fstab-generator[4026]: Ignoring "noauto" for root device
	[  +0.242550] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
	[  +3.941917] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.801138] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[  +1.042940] systemd-fstab-generator[4407]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf] <==
	* 2021-08-13 20:48:01.922733 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:48:01.952757 I | embed: serving client requests on 192.168.39.61:2379
	2021-08-13 20:48:01.954160 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:48:01.975055 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:48:12.629799 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (355.071918ms) to execute
	2021-08-13 20:48:18.621673 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" " with result "range_response_count:0 size:5" took too long (1.837036221s) to execute
	2021-08-13 20:48:18.622362 W | wal: sync duration of 1.607346013s, expected less than 1s
	2021-08-13 20:48:18.623060 W | etcdserver: request "header:<ID:12771218163585540132 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" value_size:632 lease:3547846126730764118 >> failure:<>>" with result "size:16" took too long (1.606807479s) to execute
	2021-08-13 20:48:18.624926 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.461501725s) to execute
	2021-08-13 20:48:18.628021 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813204600-30853\" " with result "range_response_count:1 size:3982" took too long (1.370325429s) to execute
	2021-08-13 20:48:21.346921 W | wal: sync duration of 1.299304523s, expected less than 1s
	2021-08-13 20:48:21.347401 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.068677828s) to execute
	2021-08-13 20:48:24.481477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:26.500706 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (233.724165ms) to execute
	2021-08-13 20:48:26.501137 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gm2bv\" " with result "range_response_count:1 size:4473" took too long (378.683681ms) to execute
	2021-08-13 20:48:26.502059 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-4grvm\" " with result "range_response_count:1 size:4461" took too long (270.883259ms) to execute
	2021-08-13 20:48:28.869625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:38.868019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:48.868044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:58.870803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:49:00.399177 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.157615469s) to execute
	2021-08-13 20:49:00.400612 W | etcdserver: request "header:<ID:12771218163585540646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" mod_revision:468 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" value_size:584 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" > >>" with result "size:16" took too long (200.747119ms) to execute
	2021-08-13 20:49:00.400917 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (1.158534213s) to execute
	2021-08-13 20:49:00.401297 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (569.698ms) to execute
	2021-08-13 20:49:08.868736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:49:17 up 2 min,  0 users,  load average: 1.86, 0.77, 0.29
	Linux pause-20210813204600-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b] <==
	* Trace[1175388272]: [1.383273804s] [1.383273804s] END
	I0813 20:48:18.647776       1 trace.go:205] Trace[1480647024]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.255) (total time: 1391ms):
	Trace[1480647024]: ---"Object stored in database" 1379ms (20:48:00.638)
	Trace[1480647024]: [1.391864844s] [1.391864844s] END
	I0813 20:48:18.651341       1 trace.go:205] Trace[532588033]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[532588033]: [1.395160654s] [1.395160654s] END
	I0813 20:48:18.651913       1 trace.go:205] Trace[486245217]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[486245217]: [1.395849853s] [1.395849853s] END
	I0813 20:48:18.659173       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:48:21.348539       1 trace.go:205] Trace[264690694]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:20.278) (total time: 1070ms):
	Trace[264690694]: [1.070400996s] [1.070400996s] END
	I0813 20:48:22.995388       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:48:23.545730       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:48:37.713151       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:48:37.713388       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:48:37.713410       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:49:00.401993       1 trace.go:205] Trace[875370503]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.240) (total time: 1161ms):
	Trace[875370503]: ---"About to write a response" 1161ms (20:49:00.401)
	Trace[875370503]: [1.161749328s] [1.161749328s] END
	I0813 20:49:00.403705       1 trace.go:205] Trace[1375945297]: "Get" url:/api/v1/nodes/pause-20210813204600-30853,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.241) (total time: 1162ms):
	Trace[1375945297]: ---"About to write a response" 1161ms (20:49:00.403)
	Trace[1375945297]: [1.162052238s] [1.162052238s] END
	I0813 20:49:08.639766       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:49:08.639943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:49:08.639963       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659] <==
	* I0813 20:48:22.670523       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0813 20:48:22.676047       1 shared_informer.go:247] Caches are synced for job 
	I0813 20:48:22.676648       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:48:22.680632       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:48:22.680827       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0813 20:48:22.713877       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:48:22.743162       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:48:22.743798       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:48:22.849717       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0813 20:48:22.888695       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:22.888733       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:48:22.923738       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:48:22.923844       1 disruption.go:371] Sending events to api server.
	I0813 20:48:22.939921       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:23.006118       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4n8kb"
	E0813 20:48:23.080425       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4ec5a127-3b2a-4f66-8321-f0bab85709c0", ResourceVersion:"304", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484491, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000abfda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000abfdb8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014a9280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00142b740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abf
dd0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abfde8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014a92c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001419440), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00144e5a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000843e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00163c430)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00144e608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:48:23.316478       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352329       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352427       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:48:23.554638       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:48:23.583893       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:48:23.645559       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gm2bv"
	I0813 20:48:23.652683       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4grvm"
	I0813 20:48:23.772425       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gm2bv"
	
	* 
	* ==> kube-proxy [2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164] <==
	* I0813 20:48:26.523023       1 node.go:172] Successfully retrieved node IP: 192.168.39.61
	I0813 20:48:26.523578       1 server_others.go:140] Detected node IP 192.168.39.61
	W0813 20:48:26.523867       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:48:26.597173       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:48:26.597466       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:48:26.597629       1 server_others.go:212] Using iptables Proxier.
	I0813 20:48:26.599876       1 server.go:643] Version: v1.21.3
	I0813 20:48:26.601871       1 config.go:315] Starting service config controller
	I0813 20:48:26.601925       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:48:26.601964       1 config.go:224] Starting endpoint slice config controller
	I0813 20:48:26.601993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:48:26.626937       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:48:26.631306       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf] <==
	* E0813 20:48:07.253858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:48:07.253939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:07.254089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:48:07.254763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:07.256625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:07.257805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:07.257988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:07.258811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:07.259413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:48:07.261132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.091658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:08.147159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:08.202089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:08.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:08.318956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.416964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:08.426635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:08.429682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.498271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.623065       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:48:08.623400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.652497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:48:11.848968       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:17 UTC. --
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:37.311628    2809 scope.go:111] "RemoveContainer" containerID="09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92"
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: E0813 20:48:37.324554    2809 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": container with ID starting with 09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92 not found: ID does not exist" containerID="09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92"
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:37.324683    2809 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92} err="failed to get container status \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": rpc error: code = NotFound desc = could not find container \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": container with ID starting with 09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92 not found: ID does not exist"
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: W0813 20:48:38.042626    2809 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/21759cc2-1fdb-417f-bc71-01fb6f9d0c35/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043002    2809 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume" (OuterVolumeSpecName: "config-volume") pod "21759cc2-1fdb-417f-bc71-01fb6f9d0c35" (UID: "21759cc2-1fdb-417f-bc71-01fb6f9d0c35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043446    2809 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume\") pod \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\" (UID: \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\") "
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043655    2809 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdzg\" (UniqueName: \"kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg\") pod \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\" (UID: \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\") "
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.044383    2809 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume\") on node \"pause-20210813204600-30853\" DevicePath \"\""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.054821    2809 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg" (OuterVolumeSpecName: "kube-api-access-2wdzg") pod "21759cc2-1fdb-417f-bc71-01fb6f9d0c35" (UID: "21759cc2-1fdb-417f-bc71-01fb6f9d0c35"). InnerVolumeSpecName "kube-api-access-2wdzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.145496    2809 reconciler.go:319] "Volume detached for volume \"kube-api-access-2wdzg\" (UniqueName: \"kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg\") on node \"pause-20210813204600-30853\" DevicePath \"\""
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: W0813 20:49:07.659903    2809 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: W0813 20:49:07.660584    2809 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.734933    2809 remote_image.go:71] "ListImages with filter from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.735512    2809 kuberuntime_image.go:136] "Failed to list images" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.735759    2809 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: failed to get image stats: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.446980    2809 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.447053    2809 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.447095    2809 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.083985    2809 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.126901    2809 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76-tmp\") pod \"storage-provisioner\" (UID: \"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\") "
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.127447    2809 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2qn\" (UniqueName: \"kubernetes.io/projected/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76-kube-api-access-8s2qn\") pod \"storage-provisioner\" (UID: \"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\") "
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:49:13 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:13.808051    2809 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5] <==
	* I0813 20:49:13.139876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:49:13.163404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:49:13.163867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:49:13.184473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:49:13.184758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	I0813 20:49:13.194291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e31d828-490b-41db-8431-f66bfdb15cd4", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c became leader
	I0813 20:49:13.286143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (307.96943ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813204600-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813204600-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813204600-30853 describe pod : exit status 1 (55.733837ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813204600-30853 describe pod : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (264.600617ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25: (1.232802655s)
helpers_test.go:253: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                      | multinode-20210813202419-30853-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:17 UTC | Fri, 13 Aug 2021 20:39:13 UTC |
	|         | multinode-20210813202419-30853-m03      |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | multinode-20210813202419-30853-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:13 UTC | Fri, 13 Aug 2021 20:39:14 UTC |
	|         | multinode-20210813202419-30853-m03      |                                         |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853          | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:14 UTC | Fri, 13 Aug 2021 20:39:16 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:16 UTC | Fri, 13 Aug 2021 20:39:18 UTC |
	|         | multinode-20210813202419-30853          |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:02 UTC | Fri, 13 Aug 2021 20:43:38 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false             |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:38 UTC | Fri, 13 Aug 2021 20:43:41 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl pull busybox             |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:41 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2          |                                         |         |         |                               |                               |
	|         |  --container-runtime=crio               |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl image ls                 |                                         |         |         |                               |                               |
	| -p      | test-preload-20210813204102-30853       | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:25 UTC | Fri, 13 Aug 2021 20:44:26 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	| start   | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:26 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2             |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:21 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --cancel-scheduled                      |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:34 UTC | Fri, 13 Aug 2021 20:45:42 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --schedule 5s                           |                                         |         |         |                               |                               |
	| delete  | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:59 UTC | Fri, 13 Aug 2021 20:46:00 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:02 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:02 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubenet-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:03 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | kubenet-20210813204703-30853            |                                         |         |         |                               |                               |
	| delete  | -p false-20210813204703-30853           | false-20210813204703-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:04 UTC | Fri, 13 Aug 2021 20:47:04 UTC |
	| start   | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:42 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2    |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:42 UTC | Fri, 13 Aug 2021 20:47:44 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	| start   | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:48:55 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --memory=2048                      |                                         |         |         |                               |                               |
	|         | --wait=true --driver=kvm2               |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:55 UTC | Fri, 13 Aug 2021 20:48:57 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --install-addons=false                  |                                         |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:06 UTC | Fri, 13 Aug 2021 20:49:13 UTC |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:16 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:49:06
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:49:06.750460    3412 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:06.750532    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750535    3412 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:06.750538    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750645    3412 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:06.750968    3412 out.go:305] Setting JSON to false
	I0813 20:49:06.794979    3412 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9109,"bootTime":1628878638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:49:06.795299    3412 start.go:121] virtualization: kvm guest
	I0813 20:49:06.798215    3412 out.go:177] * [pause-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:49:06.799922    3412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:06.798386    3412 notify.go:169] Checking for updates...
	I0813 20:49:06.801691    3412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:49:06.803336    3412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:49:06.804849    3412 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:49:06.805220    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:06.805637    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.805697    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.817202    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0813 20:49:06.817597    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.818173    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.818195    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.818649    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.818887    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.819077    3412 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:49:06.819425    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.819465    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.830844    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0813 20:49:06.831324    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.831848    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.831871    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.832233    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.832415    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.865593    3412 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:49:06.865627    3412 start.go:278] selected driver: kvm2
	I0813 20:49:06.865641    3412 start.go:751] validating driver "kvm2" against &{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.865757    3412 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:49:06.866497    3412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.866703    3412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:49:06.878129    3412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:49:06.878764    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:06.878779    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:06.878789    3412 start_flags.go:277] config:
	{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.878936    3412 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.881128    3412 out.go:177] * Starting control plane node pause-20210813204600-30853 in cluster pause-20210813204600-30853
	I0813 20:49:06.881153    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:06.881197    3412 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:49:06.881216    3412 cache.go:56] Caching tarball of preloaded images
	I0813 20:49:06.881339    3412 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:49:06.881361    3412 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:49:06.881476    3412 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/config.json ...
	I0813 20:49:06.881656    3412 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:49:06.881687    3412 start.go:313] acquiring machines lock for pause-20210813204600-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:49:06.881775    3412 start.go:317] acquired machines lock for "pause-20210813204600-30853" in 71.324µs
	I0813 20:49:06.881794    3412 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:49:06.881801    3412 fix.go:55] fixHost starting: 
	I0813 20:49:06.882135    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.882177    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.894411    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0813 20:49:06.894958    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.895630    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.895652    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.896024    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.896206    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.896395    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:06.899827    3412 fix.go:108] recreateIfNeeded on pause-20210813204600-30853: state=Running err=<nil>
	W0813 20:49:06.899844    3412 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:49:05.079802    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:06.902070    3412 out.go:177] * Updating the running kvm2 "pause-20210813204600-30853" VM ...
	I0813 20:49:06.902100    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902283    3412 machine.go:88] provisioning docker machine ...
	I0813 20:49:06.902305    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902430    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902571    3412 buildroot.go:166] provisioning hostname "pause-20210813204600-30853"
	I0813 20:49:06.902599    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902737    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:06.908023    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908395    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:06.908431    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908509    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:06.908703    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908861    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908990    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:06.909175    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:06.909381    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:06.909399    3412 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813204600-30853 && echo "pause-20210813204600-30853" | sudo tee /etc/hostname
	I0813 20:49:07.062168    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813204600-30853
	
	I0813 20:49:07.062210    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.068189    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068544    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.068577    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068759    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.068953    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069117    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069259    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.069439    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.069612    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.069649    3412 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813204600-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813204600-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813204600-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:49:07.221530    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:49:07.221612    3412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/
docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:49:07.221648    3412 buildroot.go:174] setting up certificates
	I0813 20:49:07.221660    3412 provision.go:83] configureAuth start
	I0813 20:49:07.221672    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:07.221918    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:07.227471    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.227839    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.227868    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.228085    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.232869    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233213    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.233251    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233347    3412 provision.go:138] copyHostCerts
	I0813 20:49:07.233436    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:49:07.233450    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:49:07.233511    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:49:07.233650    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:49:07.233667    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:49:07.233695    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:49:07.233774    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:49:07.233784    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:49:07.233812    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:49:07.233859    3412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813204600-30853 san=[192.168.39.61 192.168.39.61 localhost 127.0.0.1 minikube pause-20210813204600-30853]
	I0813 20:49:07.320299    3412 provision.go:172] copyRemoteCerts
	I0813 20:49:07.320390    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:49:07.320428    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.325783    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326112    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.326152    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326310    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.326478    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.326610    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.326733    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:07.427180    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:49:07.450672    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:49:07.471272    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:49:07.489660    3412 provision.go:86] duration metric: configureAuth took 267.984336ms
	I0813 20:49:07.489686    3412 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:49:07.489862    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:07.489982    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.495300    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495618    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.495653    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495797    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.495985    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496150    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496279    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.496434    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.496609    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.496631    3412 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:49:08.602797    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:49:08.602830    3412 machine.go:91] provisioned docker machine in 1.700528876s
	I0813 20:49:08.602841    3412 start.go:267] post-start starting for "pause-20210813204600-30853" (driver="kvm2")
	I0813 20:49:08.602846    3412 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:49:08.602880    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.603196    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:49:08.603247    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.608420    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608704    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.608735    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608875    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.609064    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.609198    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.609343    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.709733    3412 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:49:08.715709    3412 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:49:08.715731    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:49:08.715792    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:49:08.715871    3412 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:49:08.715956    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:49:08.724293    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:08.750217    3412 start.go:270] post-start completed in 147.362269ms
	I0813 20:49:08.750260    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.750492    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.756215    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756621    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.756650    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756812    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.757034    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757170    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757300    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.757480    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:08.757670    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:08.757683    3412 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 20:49:08.900897    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887748.901369788
	
	I0813 20:49:08.900932    3412 fix.go:212] guest clock: 1628887748.901369788
	I0813 20:49:08.900944    3412 fix.go:225] Guest: 2021-08-13 20:49:08.901369788 +0000 UTC Remote: 2021-08-13 20:49:08.750472863 +0000 UTC m=+2.052052145 (delta=150.896925ms)
	I0813 20:49:08.900988    3412 fix.go:196] guest clock delta is within tolerance: 150.896925ms
	I0813 20:49:08.900996    3412 fix.go:57] fixHost completed within 2.019194265s
	I0813 20:49:08.901002    3412 start.go:80] releasing machines lock for "pause-20210813204600-30853", held for 2.019216553s
	I0813 20:49:08.901046    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.901309    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:08.906817    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907191    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.907257    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907379    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.907574    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908140    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908391    3412 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:08.908418    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.908488    3412 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:49:08.908539    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.915229    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915547    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.915580    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915727    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.915920    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.916011    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916080    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.916237    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.916429    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.916461    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916636    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.916784    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.917107    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.917257    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:09.014176    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:09.014353    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:09.061257    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:09.061287    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:09.061352    3412 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:49:09.075880    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:49:09.086949    3412 docker.go:153] disabling docker service ...
	I0813 20:49:09.087012    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:49:09.103245    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:49:09.117178    3412 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:49:09.373507    3412 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:49:09.585738    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:49:09.599794    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:49:09.615240    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
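
The two commands above point crictl at the CRI-O socket and pin CRI-O's pause image. A sketch of writing the same /etc/crictl.yaml from Go (hypothetical; minikube actually runs the shell pipeline shown in the log):

    // Writes the crictl endpoint config shown above. Run as root, since the
    // log's version uses sudo tee for the same reason.
    package main

    import (
        "log"
        "os"
    )

    const crictlYAML = `runtime-endpoint: unix:///var/run/crio/crio.sock
    image-endpoint: unix:///var/run/crio/crio.sock
    `

    func main() {
        if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
            log.Fatal(err)
        }
    }
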
	I0813 20:49:09.623727    3412 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:49:09.630919    3412 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:49:09.637747    3412 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:49:09.808564    3412 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:49:09.952030    3412 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:49:09.952144    3412 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
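
start.go waits for the CRI socket to appear before asking crictl for a version. A sketch of such a wait loop (hypothetical helper, assuming the 60s deadline logged above):

    // waitForSocket polls for path until it exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step in the log.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
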
	I0813 20:49:09.959400    3412 start.go:413] Will wait 60s for crictl version
	I0813 20:49:09.959452    3412 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:49:09.991124    3412 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:49:09.991251    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.280528    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.528655    3412 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:49:10.528694    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:10.534359    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.534782    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:10.534815    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.535076    3412 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:49:10.539953    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:10.540017    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.583397    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.583419    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:10.583459    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.620617    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.620642    3412 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:49:10.620703    3412 ssh_runner.go:149] Run: crio config
	I0813 20:49:10.896405    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:10.896427    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:10.896436    3412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:49:10.896448    3412 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813204600-30853 NodeName:pause-20210813204600-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.61 CgroupDriver:systemd ClientCAFile:/var/lib/m
inikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:49:10.896629    3412 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210813204600-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
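The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:153. A minimal text/template sketch of that rendering for the first document (hypothetical template and field names; not minikube's actual one):

    package main

    import (
        "os"
        "text/template"
    )

    // Covers only the InitConfiguration fields visible in the log above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        _ = t.Execute(os.Stdout, map[string]string{
            "AdvertiseAddress": "192.168.39.61",
            "APIServerPort":    "8443",
            "CRISocket":        "/var/run/crio/crio.sock",
            "NodeName":         "pause-20210813204600-30853",
            "NodeIP":           "192.168.39.61",
        })
    }
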
	I0813 20:49:10.896754    3412 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=pause-20210813204600-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.61 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:49:10.896819    3412 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:49:10.911638    3412 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:49:10.911723    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:49:10.920269    3412 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0813 20:49:10.933623    3412 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:49:10.945877    3412 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0813 20:49:10.958716    3412 ssh_runner.go:149] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0813 20:49:10.962845    3412 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853 for IP: 192.168.39.61
	I0813 20:49:10.962912    3412 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:49:10.962936    3412 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:49:10.963041    3412 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.key
	I0813 20:49:10.963067    3412 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key.e9ce627b
	I0813 20:49:10.963088    3412 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key
	I0813 20:49:10.963223    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:49:10.963274    3412 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:49:10.963290    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:49:10.963332    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:49:10.963362    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:49:10.963395    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:49:10.963481    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:10.964763    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:49:10.996208    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:49:11.015193    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:49:11.032382    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:49:11.050461    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:49:11.067415    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:49:11.085267    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:49:11.102588    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:49:11.128113    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:49:11.146008    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:49:11.162723    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:49:11.181637    3412 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:49:11.193799    3412 ssh_runner.go:149] Run: openssl version
	I0813 20:49:11.199783    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:49:11.209928    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214459    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214508    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.221207    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:49:11.229476    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:49:11.237550    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245454    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245501    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.251754    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:49:11.258461    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:49:11.267146    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271736    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271779    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.278000    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
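
Each test/link step above installs a cert into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 in this run). A sketch of that hash-and-symlink step (hypothetical helper; it shells out to openssl x509 -hash exactly as the log does):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath, the
    // same layout the `test -L || ln -fs` commands above produce.
    func linkByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // recreate idempotently
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/308532.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
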
	I0813 20:49:11.284415    3412 kubeadm.go:390] StartCluster: {Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 Clu
sterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:11.284518    3412 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:49:11.284561    3412 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:49:11.324305    3412 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:11.324324    3412 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:11.324329    3412 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:11.324336    3412 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:11.324339    3412 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:11.324343    3412 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:11.324347    3412 cri.go:76] found id: ""
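
cri.go then cross-checks those container IDs against `sudo runc list -f json`, whose output follows. A minimal sketch of consuming that JSON, with struct fields limited to those visible in the blob below:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // runcContainer maps the fields of `runc list -f json` output that the
    // log below shows: id, status, and the CRI-O annotations.
    type runcContainer struct {
        ID          string            `json:"id"`
        Status      string            `json:"status"`
        Annotations map[string]string `json:"annotations"`
    }

    func main() {
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var cs []runcContainer
        if err := json.Unmarshal(out, &cs); err != nil {
            log.Fatal(err)
        }
        for _, c := range cs {
            fmt.Println(c.ID, c.Status, c.Annotations["io.kubernetes.pod.namespace"])
        }
    }
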
	I0813 20:49:11.324383    3412 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:11.370394    3412 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.prop
erty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.Contai
nerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30
853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c3
9b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe9
37ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","i
o.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes
.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af7
8ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210
813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff81
9d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d8
2b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/co
nfig.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.H
ostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.M
ountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"k
ube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/userdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"Fi
le","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.ui
d\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kube
rnetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.
61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/userdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePol
icy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kub
e-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","
io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/v
olume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:
48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-41
7f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-
o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968
848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container
.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.p
od.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProf
ilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d2
2eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.
kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/
storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\
":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created"
:"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/sto
rage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernet
es.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d55244
9a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",
\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]",
"io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode"
:"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:11.370977    3412 cri.go:113] list returned 13 containers
	I0813 20:49:11.370992    3412 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:11.371004    3412 cri.go:122] skipping {2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 running}: state = "running", want "paused"
	I0813 20:49:11.371014    3412 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:11.371019    3412 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:11.371023    3412 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:11.371028    3412 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:11.371034    3412 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:11.371040    3412 cri.go:122] skipping {66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf running}: state = "running", want "paused"
	I0813 20:49:11.371048    3412 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:11.371054    3412 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:11.371063    3412 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:11.371069    3412 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:11.371076    3412 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:11.371081    3412 cri.go:122] skipping {82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b running}: state = "running", want "paused"
	I0813 20:49:11.371087    3412 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:11.371091    3412 cri.go:122] skipping {83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 running}: state = "running", want "paused"
	I0813 20:49:11.371099    3412 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:11.371105    3412 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:11.371110    3412 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:11.371115    3412 cri.go:122] skipping {ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf running}: state = "running", want "paused"
	I0813 20:49:11.371119    3412 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:11.371127    3412 cri.go:122] skipping {d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 running}: state = "running", want "paused"
	I0813 20:49:11.371135    3412 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:11.371144    3412 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:11.371154    3412 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:11.371164    3412 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
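
The cri.go:116/118/122 lines above walk all thirteen listed containers and skip each one that is either absent from the ps listing or not in the wanted "paused" state. A sketch of that filter under those assumptions (the container type and helper name here are hypothetical, not minikube's real implementation):

    type container struct {
        ID     string
        Status string // "running", "paused", or "stopped"
    }

    // filterByState keeps only containers that are both present in the
    // ps listing and already in the wanted state, mirroring the two
    // "skipping ..." branches in the log above.
    func filterByState(all []container, inPs map[string]bool, want string) []container {
        var keep []container
        for _, c := range all {
            if !inPs[c.ID] {
                continue // "skipping <id> - not in ps"
            }
            if c.Status != want {
                continue // `skipping {<id> running}: state = "running", want "paused"`
            }
            keep = append(keep, c)
        }
        return keep
    }
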
	I0813 20:49:11.371203    3412 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:49:11.379585    3412 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:49:11.379610    3412 kubeadm.go:600] restartCluster start
	I0813 20:49:11.379656    3412 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:49:11.387273    3412 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:49:11.388131    3412 kubeconfig.go:93] found "pause-20210813204600-30853" server: "https://192.168.39.61:8443"
	I0813 20:49:11.389906    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.391540    3412 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:49:11.398645    3412 api_server.go:164] Checking apiserver status ...
	I0813 20:49:11.398727    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:11.410339    3412 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2593/cgroup
	I0813 20:49:11.416825    3412 api_server.go:180] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope"
	I0813 20:49:11.416874    3412 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope/freezer.state
	I0813 20:49:11.424153    3412 api_server.go:202] freezer state: "THAWED"
	I0813 20:49:11.424172    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:11.430386    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
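
api_server.go:164-265 above is the full apiserver probe: pgrep finds the process, its freezer cgroup is read to confirm the crio scope is THAWED rather than frozen, and finally /healthz is fetched over HTTPS. A rough Go equivalent of the final HTTPS step (the TLS handling is simplified for the sketch; minikube loads the cluster CA certificate rather than skipping verification):

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz mirrors the "Checking apiserver healthz" step: a GET
    // against /healthz that treats anything but a 200 response as unhealthy.
    func checkHealthz(endpoint string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: the test cluster uses a self-signed CA, and the
            // real code verifies against it instead of skipping checks.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }
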
	I0813 20:49:11.447400    3412 system_pods.go:86] 6 kube-system pods found
	I0813 20:49:11.447439    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:11.447446    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:11.447453    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:11.447457    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:11.447460    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:11.447465    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:11.448566    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:11.448586    3412 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.39.61
	I0813 20:49:11.448597    3412 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:49:11.448603    3412 kubeadm.go:604] restartCluster took 68.987456ms
	I0813 20:49:11.448610    3412 kubeadm.go:392] StartCluster complete in 164.201481ms
	I0813 20:49:11.448627    3412 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.448743    3412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:11.449587    3412 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.450509    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.454641    3412 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813204600-30853" rescaled to 1
	I0813 20:49:11.454698    3412 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:49:11.454707    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:49:11.456952    3412 out.go:177] * Verifying Kubernetes components...
	I0813 20:49:11.457008    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:11.454754    3412 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:49:11.457069    3412 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.455000    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:11.457090    3412 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813204600-30853"
	W0813 20:49:11.457098    3412 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:49:11.457112    3412 addons.go:59] Setting default-storageclass=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.457130    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.457136    3412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813204600-30853"
	I0813 20:49:11.457449    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457490    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.457642    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457688    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.468728    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0813 20:49:11.469146    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.469685    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.469705    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.470063    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.470584    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.470626    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.476732    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0813 20:49:11.477171    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.477677    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.477701    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.478079    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.478277    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.482479    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.483740    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0813 20:49:11.484114    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.484536    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.484555    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.484941    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.485097    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.487884    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.490267    3412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:49:11.488882    3412 addons.go:135] Setting addon default-storageclass=true in "pause-20210813204600-30853"
	W0813 20:49:11.490289    3412 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:49:11.490323    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.490374    3412 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.490389    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:49:11.490406    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.490689    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.490728    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.496655    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497065    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.497093    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497244    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.497423    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.497618    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.497767    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.503422    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0813 20:49:11.503821    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.504277    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.504306    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.504582    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.505173    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.505219    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.518799    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0813 20:49:11.519214    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.519629    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.519655    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.519995    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.520180    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.523435    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.523650    3412 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:11.523666    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:49:11.523682    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.529028    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529396    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.529423    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529571    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.529736    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.529865    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.530004    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.605965    3412 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:49:11.606090    3412 node_ready.go:35] waiting up to 6m0s for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610421    3412 node_ready.go:49] node "pause-20210813204600-30853" has status "Ready":"True"
	I0813 20:49:11.610442    3412 node_ready.go:38] duration metric: took 4.320432ms waiting for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610453    3412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:11.616546    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.616740    3412 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631733    3412 pod_ready.go:92] pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.631757    3412 pod_ready.go:81] duration metric: took 14.992576ms waiting for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631771    3412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639091    3412 pod_ready.go:92] pod "etcd-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.639117    3412 pod_ready.go:81] duration metric: took 7.33748ms waiting for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639129    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645487    3412 pod_ready.go:92] pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.645508    3412 pod_ready.go:81] duration metric: took 6.370538ms waiting for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645519    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652583    3412 pod_ready.go:92] pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.652602    3412 pod_ready.go:81] duration metric: took 7.073719ms waiting for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652614    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.658710    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
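
Both addon steps follow the same pattern visible above: the manifest is copied into /etc/kubernetes/addons over SSH (the "scp memory" lines), then applied with the kubectl binary minikube keeps under /var/lib/minikube/binaries against the in-VM kubeconfig. A sketch of that apply step, using local os/exec in place of minikube's SSH runner:

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon runs the same command the ssh_runner lines show, here
    // executed locally; minikube actually runs it inside the VM over SSH.
    func applyAddon(kubectlVersion, manifest string) error {
        kubectl := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", kubectlVersion)
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            kubectl, "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }
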
	I0813 20:49:12.038755    3412 pod_ready.go:92] pod "kube-proxy-4n8kb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.038776    3412 pod_ready.go:81] duration metric: took 386.155583ms waiting for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.038787    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.069005    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069032    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069056    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069036    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069332    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069333    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069336    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069348    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069357    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069364    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069368    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069371    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069377    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069380    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069631    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069649    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069664    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069635    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069693    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069706    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069717    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069914    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069931    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.071889    3412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:49:12.071910    3412 addons.go:344] enableAddons completed in 617.161828ms
	I0813 20:49:12.434704    3412 pod_ready.go:92] pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.434726    3412 pod_ready.go:81] duration metric: took 395.931948ms waiting for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.434734    3412 pod_ready.go:38] duration metric: took 824.269103ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
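
Each pod_ready.go wait above resolves by reading the pod's Ready condition from the API server. A condensed client-go version of that check (clientset construction omitted; namespace and pod name are caller-supplied, and this is a sketch rather than minikube's actual code):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's Ready condition is True, which is
    // what each `pod_ready.go:92 ... has status "Ready":"True"` line confirms.
    func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
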
	I0813 20:49:12.434752    3412 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:12.434790    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:12.451457    3412 api_server.go:70] duration metric: took 996.725767ms to wait for apiserver process to appear ...
	I0813 20:49:12.451487    3412 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:12.451500    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:12.457776    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0813 20:49:12.458697    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:12.458716    3412 api_server.go:129] duration metric: took 7.221294ms to wait for apiserver health ...
	I0813 20:49:12.458726    3412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:12.637203    3412 system_pods.go:59] 7 kube-system pods found
	I0813 20:49:12.637240    3412 system_pods.go:61] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:12.637248    3412 system_pods.go:61] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:12.637254    3412 system_pods.go:61] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:12.637261    3412 system_pods.go:61] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:12.637266    3412 system_pods.go:61] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:12.637272    3412 system_pods.go:61] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:12.637281    3412 system_pods.go:61] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:12.637290    3412 system_pods.go:74] duration metric: took 178.557519ms to wait for pod list to return data ...
	I0813 20:49:12.637299    3412 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:12.841324    3412 default_sa.go:45] found service account: "default"
	I0813 20:49:12.841350    3412 default_sa.go:55] duration metric: took 204.040505ms for default service account to be created ...
	I0813 20:49:12.841359    3412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:13.042158    3412 system_pods.go:86] 7 kube-system pods found
	I0813 20:49:13.042205    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:13.042216    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:13.042224    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:13.042237    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:13.042245    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:13.042257    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:13.042278    3412 system_pods.go:89] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:13.042295    3412 system_pods.go:126] duration metric: took 200.930278ms to wait for k8s-apps to be running ...
	I0813 20:49:13.042313    3412 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:13.042369    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:13.056816    3412 system_svc.go:56] duration metric: took 14.491659ms WaitForService to wait for kubelet.
	I0813 20:49:13.056852    3412 kubeadm.go:547] duration metric: took 1.60212918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:49:13.056882    3412 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:13.236184    3412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:49:13.236241    3412 node_conditions.go:123] node cpu capacity is 2
	I0813 20:49:13.236260    3412 node_conditions.go:105] duration metric: took 179.373183ms to run NodePressure ...
	I0813 20:49:13.236273    3412 start.go:231] waiting for startup goroutines ...
	I0813 20:49:13.296415    3412 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:49:13.298518    3412 out.go:177] * Done! kubectl is now configured to use "pause-20210813204600-30853" cluster and "default" namespace by default
	I0813 20:49:10.080830    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:10.579566    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.540519    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40792->192.168.50.24:8443: read: connection reset by peer
	I0813 20:49:14.579739    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.580451    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
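
These pid-2943 lines come from a second, concurrently running test cluster (192.168.50.24) and show the retry side of the same probe: each failure is logged as "stopped:" and the check repeats on an interval until a deadline. A minimal retry wrapper around the checkHealthz sketch shown earlier (interval and deadline are illustrative values, not read from the log):

    // waitHealthy polls checkHealthz until it succeeds or the deadline
    // passes; imports (fmt, time) are as in the earlier sketch.
    func waitHealthy(endpoint string, interval, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for {
            err := checkHealthz(endpoint)
            if err == nil {
                return nil
            }
            if time.Now().After(stop) {
                return fmt.Errorf("apiserver never became healthy: %v", err)
            }
            time.Sleep(interval)
        }
    }
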
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:18 UTC. --
	Aug 13 20:49:17 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:17.312398125Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,StartedAt:1628887753062172592,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-prov
isioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=14b8565a-ea1b-4ad9-8ec6-79f5ea74c9ba name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.493528900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd4e874e-7bac-48d7-afdf-2139f33da115 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.493591106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd4e874e-7bac-48d7-afdf-2139f33da115 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.493792235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd4e874e-7bac-48d7-afdf-2139f33da115 name=/runtime.v1alpha2.RuntimeService/ListContainers
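	
	Each Request/Response pair in this journal is one gRPC call on CRI-O's /runtime.v1alpha2.RuntimeService; an empty ListContainersRequest filter returns the full container list (hence "No filters were applied"), which is why every response repeats the same seven kube-system containers. The sketch below issues the same RPC directly; it is an illustration, not the code that produced these logs, and it assumes CRI-O's default socket path (/var/run/crio/crio.sock) and the k8s.io/cri-api v1alpha2 bindings.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Dial the CRI-O runtime socket; adjust the path if crio.conf overrides it.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		// Same call as the logged /runtime.v1alpha2.RuntimeService/ListContainers RPC,
		// with an empty filter so the daemon returns every container it knows about.
		client := pb.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}
	
	The equivalent from a shell is crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps, which performs the same ListContainers call without any code.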
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.538070015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3516926c-0977-4ac7-bbec-585e2e2ebd45 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.538133380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3516926c-0977-4ac7-bbec-585e2e2ebd45 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.538620449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3516926c-0977-4ac7-bbec-585e2e2ebd45 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.576866119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=12ea5a60-6f1d-4030-b9dc-8659c2b15341 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.576930664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=12ea5a60-6f1d-4030-b9dc-8659c2b15341 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.577096727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=12ea5a60-6f1d-4030-b9dc-8659c2b15341 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.615469253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a18a39f7-6789-42ae-acb1-c788142a1569 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.615617168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a18a39f7-6789-42ae-acb1-c788142a1569 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.615788306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a18a39f7-6789-42ae-acb1-c788142a1569 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.654695181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb83a634-9f18-49f3-946a-7a1af8c1c3e6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.654888261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb83a634-9f18-49f3-946a-7a1af8c1c3e6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.655106703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb83a634-9f18-49f3-946a-7a1af8c1c3e6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.697127250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd41689f-eef3-4550-9b26-d8764500f141 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.697348109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd41689f-eef3-4550-9b26-d8764500f141 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.697500989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd41689f-eef3-4550-9b26-d8764500f141 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.731275783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ada7e856-ed62-47cb-8a06-d88a284a9997 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.731430516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ada7e856-ed62-47cb-8a06-d88a284a9997 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.731579053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ada7e856-ed62-47cb-8a06-d88a284a9997 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.775843000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a83198a9-ca05-47f9-9979-7035a52c116a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.775903465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a83198a9-ca05-47f9-9979-7035a52c116a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:18 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:18.776054536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a83198a9-ca05-47f9-9979-7035a52c116a name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	10dab2af99578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 seconds ago        Running             storage-provisioner       0                   2a6ab48b5042a
	d33287457e451       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   53 seconds ago       Running             coredns                   0                   8088cc5d3d38a
	2e50c328d7104       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   53 seconds ago       Running             kube-proxy                0                   564d5f18f75ed
	ac4bf726a8a57       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   e992003133001
	66655950d3afa       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   55ddf08f50f8c
	83df9633ff352       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   6c56d5bf50b7a
	82d4de99d88e5       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   f228ab759c26a
	
	* 
	* ==> coredns [d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	I0813 20:48:56.155624       1 trace.go:205] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[1427131847]: [30.002619331s] [30.002619331s] END
	E0813 20:48:56.155739       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155858       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.154) (total time: 30001ms):
	Trace[911902081]: [30.001733139s] [30.001733139s] END
	E0813 20:48:56.155865       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155918       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[2019727887]: [30.002706635s] [30.002706635s] END
	E0813 20:48:56.156104       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
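(The three reflector timeouts above all fail dialing 10.96.0.1:443, the default kubernetes Service VIP, while CoreDNS was starting. A minimal sketch of that failure mode, reusing only the address and error seen in the log; the file name and timeout are illustrative:)

// dialcheck.go: reproduce the "dial tcp 10.96.0.1:443: i/o timeout" symptom by
// attempting the same TCP dial that CoreDNS's client-go reflectors make.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		// With no kube-proxy rules programmed for the VIP yet, this times out.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver Service VIP reachable")
}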
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813204600-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813204600-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813204600-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_48_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:48:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813204600-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-20210813204600-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 07e647a52575478182b10082d1b9460a
	  System UUID:                07e647a5-2575-4781-82b1-0082d1b9460a
	  Boot ID:                    1c1f8243-ce7f-455c-a669-de6493424040
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-4grvm                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-pause-20210813204600-30853                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         62s
	  kube-system                 kube-apiserver-pause-20210813204600-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-20210813204600-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-4n8kb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-20210813204600-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  85s (x6 over 85s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x5 over 85s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x5 over 85s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 63s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                61s                kubelet     Node pause-20210813204600-30853 status is now: NodeReady
	  Normal  Starting                 53s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	*               If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.165176] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.050992] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.137498] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1726 comm=systemd-network
	[  +1.376463] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007022] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.624786] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +20.400328] systemd-fstab-generator[2162]: Ignoring "noauto" for root device
	[  +0.134832] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +0.282454] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +6.552961] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[Aug13 20:48] systemd-fstab-generator[2800]: Ignoring "noauto" for root device
	[ +13.894926] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.479825] kauditd_printk_skb: 80 callbacks suppressed
	[Aug13 20:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.187207] systemd-fstab-generator[4013]: Ignoring "noauto" for root device
	[  +0.260965] systemd-fstab-generator[4026]: Ignoring "noauto" for root device
	[  +0.242550] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
	[  +3.941917] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.801138] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[  +1.042940] systemd-fstab-generator[4407]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf] <==
	* 2021-08-13 20:48:01.922733 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:48:01.952757 I | embed: serving client requests on 192.168.39.61:2379
	2021-08-13 20:48:01.954160 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:48:01.975055 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:48:12.629799 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (355.071918ms) to execute
	2021-08-13 20:48:18.621673 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" " with result "range_response_count:0 size:5" took too long (1.837036221s) to execute
	2021-08-13 20:48:18.622362 W | wal: sync duration of 1.607346013s, expected less than 1s
	2021-08-13 20:48:18.623060 W | etcdserver: request "header:<ID:12771218163585540132 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" value_size:632 lease:3547846126730764118 >> failure:<>>" with result "size:16" took too long (1.606807479s) to execute
	2021-08-13 20:48:18.624926 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.461501725s) to execute
	2021-08-13 20:48:18.628021 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813204600-30853\" " with result "range_response_count:1 size:3982" took too long (1.370325429s) to execute
	2021-08-13 20:48:21.346921 W | wal: sync duration of 1.299304523s, expected less than 1s
	2021-08-13 20:48:21.347401 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.068677828s) to execute
	2021-08-13 20:48:24.481477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:26.500706 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (233.724165ms) to execute
	2021-08-13 20:48:26.501137 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gm2bv\" " with result "range_response_count:1 size:4473" took too long (378.683681ms) to execute
	2021-08-13 20:48:26.502059 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-4grvm\" " with result "range_response_count:1 size:4461" took too long (270.883259ms) to execute
	2021-08-13 20:48:28.869625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:38.868019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:48.868044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:58.870803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:49:00.399177 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.157615469s) to execute
	2021-08-13 20:49:00.400612 W | etcdserver: request "header:<ID:12771218163585540646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" mod_revision:468 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" value_size:584 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" > >>" with result "size:16" took too long (200.747119ms) to execute
	2021-08-13 20:49:00.400917 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (1.158534213s) to execute
	2021-08-13 20:49:00.401297 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (569.698ms) to execute
	2021-08-13 20:49:08.868736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
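(The repeated etcd warnings above, wal sync over 1s and read-only range requests taking seconds, point at slow disk I/O on the test VM. A rough probe of the same symptom, a sketch assuming only a writable temp directory on the affected filesystem; the write size is illustrative:)

// fsynctime.go: rough analogue of etcd's wal sync-duration warning; time a single
// fsync to gauge disk latency on this machine.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	f, err := os.CreateTemp("", "fsync-probe-*")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.Write(make([]byte, 1<<20)) // 1 MiB, on the order of a wal segment write
	start := time.Now()
	if err := f.Sync(); err != nil {
		panic(err)
	}
	fmt.Println("fsync took", time.Since(start)) // etcd warns when wal sync exceeds 1s
}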
	
	* 
	* ==> kernel <==
	*  20:49:19 up 2 min,  0 users,  load average: 1.86, 0.77, 0.29
	Linux pause-20210813204600-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b] <==
	* Trace[1175388272]: [1.383273804s] [1.383273804s] END
	I0813 20:48:18.647776       1 trace.go:205] Trace[1480647024]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.255) (total time: 1391ms):
	Trace[1480647024]: ---"Object stored in database" 1379ms (20:48:00.638)
	Trace[1480647024]: [1.391864844s] [1.391864844s] END
	I0813 20:48:18.651341       1 trace.go:205] Trace[532588033]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[532588033]: [1.395160654s] [1.395160654s] END
	I0813 20:48:18.651913       1 trace.go:205] Trace[486245217]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[486245217]: [1.395849853s] [1.395849853s] END
	I0813 20:48:18.659173       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:48:21.348539       1 trace.go:205] Trace[264690694]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:20.278) (total time: 1070ms):
	Trace[264690694]: [1.070400996s] [1.070400996s] END
	I0813 20:48:22.995388       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:48:23.545730       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:48:37.713151       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:48:37.713388       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:48:37.713410       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:49:00.401993       1 trace.go:205] Trace[875370503]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.240) (total time: 1161ms):
	Trace[875370503]: ---"About to write a response" 1161ms (20:49:00.401)
	Trace[875370503]: [1.161749328s] [1.161749328s] END
	I0813 20:49:00.403705       1 trace.go:205] Trace[1375945297]: "Get" url:/api/v1/nodes/pause-20210813204600-30853,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.241) (total time: 1162ms):
	Trace[1375945297]: ---"About to write a response" 1161ms (20:49:00.403)
	Trace[1375945297]: [1.162052238s] [1.162052238s] END
	I0813 20:49:08.639766       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:49:08.639943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:49:08.639963       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659] <==
	* I0813 20:48:22.670523       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0813 20:48:22.676047       1 shared_informer.go:247] Caches are synced for job 
	I0813 20:48:22.676648       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:48:22.680632       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:48:22.680827       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0813 20:48:22.713877       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:48:22.743162       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:48:22.743798       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:48:22.849717       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0813 20:48:22.888695       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:22.888733       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:48:22.923738       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:48:22.923844       1 disruption.go:371] Sending events to api server.
	I0813 20:48:22.939921       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:23.006118       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4n8kb"
	E0813 20:48:23.080425       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4ec5a127-3b2a-4f66-8321-f0bab85709c0", ResourceVersion:"304", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484491, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000abfda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000abfdb8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014a9280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00142b740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abf
dd0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abfde8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014a92c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001419440), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00144e5a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000843e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00163c430)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00144e608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:48:23.316478       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352329       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352427       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:48:23.554638       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:48:23.583893       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:48:23.645559       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gm2bv"
	I0813 20:48:23.652683       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4grvm"
	I0813 20:48:23.772425       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gm2bv"
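(The daemon_controller error above, "the object has been modified; please apply your changes to the latest version and try again", is a routine optimistic-concurrency conflict: the controller's update raced another writer on the kube-proxy DaemonSet and must re-read and retry. A generic sketch of the standard client-go retry pattern, assuming k8s.io/client-go is available; the mutation body is a placeholder, not the controller-manager's actual code:)

// conflictretry.go: the usual shape of handling resourceVersion conflicts with
// client-go's retry helper.
package main

import (
	"fmt"

	"k8s.io/client-go/util/retry"
)

func main() {
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// In real code: GET the latest object, re-apply the change, issue the
		// UPDATE, and return its error; a conflict triggers another attempt
		// with backoff.
		return nil
	})
	fmt.Println("update finished:", err)
}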
	
	* 
	* ==> kube-proxy [2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164] <==
	* I0813 20:48:26.523023       1 node.go:172] Successfully retrieved node IP: 192.168.39.61
	I0813 20:48:26.523578       1 server_others.go:140] Detected node IP 192.168.39.61
	W0813 20:48:26.523867       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:48:26.597173       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:48:26.597466       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:48:26.597629       1 server_others.go:212] Using iptables Proxier.
	I0813 20:48:26.599876       1 server.go:643] Version: v1.21.3
	I0813 20:48:26.601871       1 config.go:315] Starting service config controller
	I0813 20:48:26.601925       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:48:26.601964       1 config.go:224] Starting endpoint slice config controller
	I0813 20:48:26.601993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:48:26.626937       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:48:26.631306       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf] <==
	* E0813 20:48:07.253858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:48:07.253939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:07.254089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:48:07.254763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:07.256625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:07.257805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:07.257988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:07.258811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:07.259413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:48:07.261132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.091658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:08.147159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:08.202089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:08.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:08.318956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.416964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:08.426635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:08.429682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.498271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.623065       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:48:08.623400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.652497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:48:11.848968       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:19 UTC. --
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:37.311628    2809 scope.go:111] "RemoveContainer" containerID="09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92"
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: E0813 20:48:37.324554    2809 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": container with ID starting with 09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92 not found: ID does not exist" containerID="09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92"
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:37.324683    2809 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92} err="failed to get container status \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": rpc error: code = NotFound desc = could not find container \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": container with ID starting with 09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92 not found: ID does not exist"
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: W0813 20:48:38.042626    2809 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/21759cc2-1fdb-417f-bc71-01fb6f9d0c35/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043002    2809 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume" (OuterVolumeSpecName: "config-volume") pod "21759cc2-1fdb-417f-bc71-01fb6f9d0c35" (UID: "21759cc2-1fdb-417f-bc71-01fb6f9d0c35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043446    2809 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume\") pod \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\" (UID: \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\") "
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043655    2809 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdzg\" (UniqueName: \"kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg\") pod \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\" (UID: \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\") "
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.044383    2809 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume\") on node \"pause-20210813204600-30853\" DevicePath \"\""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.054821    2809 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg" (OuterVolumeSpecName: "kube-api-access-2wdzg") pod "21759cc2-1fdb-417f-bc71-01fb6f9d0c35" (UID: "21759cc2-1fdb-417f-bc71-01fb6f9d0c35"). InnerVolumeSpecName "kube-api-access-2wdzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.145496    2809 reconciler.go:319] "Volume detached for volume \"kube-api-access-2wdzg\" (UniqueName: \"kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg\") on node \"pause-20210813204600-30853\" DevicePath \"\""
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: W0813 20:49:07.659903    2809 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: W0813 20:49:07.660584    2809 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.734933    2809 remote_image.go:71] "ListImages with filter from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.735512    2809 kuberuntime_image.go:136] "Failed to list images" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.735759    2809 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: failed to get image stats: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.446980    2809 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.447053    2809 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.447095    2809 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.083985    2809 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.126901    2809 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76-tmp\") pod \"storage-provisioner\" (UID: \"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\") "
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.127447    2809 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2qn\" (UniqueName: \"kubernetes.io/projected/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76-kube-api-access-8s2qn\") pod \"storage-provisioner\" (UID: \"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\") "
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:49:13 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:13.808051    2809 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
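(The crio.sock dial failures at 20:49:07 followed by systemd stopping kubelet at 20:49:13 are consistent with the pause operation itself acting on the node, matching the "Pausing" status captured further below. A small post-mortem sketch for this state, assuming a shell on the node; systemctl and the unit names are standard, the rest is illustrative:)

// unitcheck.go: query systemd for the state of kubelet and crio after a pause,
// mirroring the "Stopped kubelet" transition logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		// `systemctl is-active` exits non-zero for inactive units, so ignore
		// the error and report the printed state instead.
		out, _ := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		fmt.Printf("%s: %s\n", unit, strings.TrimSpace(string(out)))
	}
}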
	
	* 
	* ==> storage-provisioner [10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5] <==
	* I0813 20:49:13.139876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:49:13.163404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:49:13.163867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:49:13.184473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:49:13.184758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	I0813 20:49:13.194291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e31d828-490b-41db-8431-f66bfdb15cd4", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c became leader
	I0813 20:49:13.286143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	

                                                
                                                
-- /stdout --
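The kubelet errors in the log above share a single cause: the CRI socket /var/run/crio/crio.sock was briefly absent, so every gRPC dial from the kubelet failed. This is consistent with the "sudo systemctl restart crio" issued by the concurrent second "minikube start" run, visible later in this report. A minimal Go sketch, illustrative only and not kubelet or minikube code, reproducing the underlying dial error:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dialing a unix socket whose file does not exist fails with the same
		// root error the kubelet logged: "connect: no such file or directory".
		_, err := net.Dial("unix", "/var/run/crio/crio.sock")
		fmt.Println(err)
	}

Once CRI-O recreates the socket, the kubelet's gRPC client reconnects and the ListImages/ListPodSandbox calls succeed again, as the later log lines show.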
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (296.837917ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813204600-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813204600-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813204600-30853 describe pod : exit status 1 (68.526969ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813204600-30853 describe pod : exit status 1
--- FAIL: TestPause/serial/Pause (6.64s)
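The describe-pod failure in the post-mortem above is benign: the field selector found no non-running pods, so the helper invoked "kubectl describe pod" with an empty name list, which kubectl rejects. A short sketch, assuming kubectl is on PATH (this is not the actual helpers_test.go code), that reproduces the same exit status:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "kubectl describe pod" with no pod names fails with
		// "error: resource name may not be empty" and exit status 1,
		// matching the stderr captured above.
		out, err := exec.Command("kubectl", "describe", "pod").CombinedOutput()
		fmt.Print(string(out))
		fmt.Println(err)
	}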

                                                
                                    
TestPause/serial/VerifyStatus (2.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210813204600-30853 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210813204600-30853 --output=json --layout=cluster: exit status 2 (300.403651ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210813204600-30853","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210813204600-30853 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813204600-30853","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":200,"StatusName":"OK"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:49:20.259923    3758 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:49:20.259967    3758 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:49:20.259995    3758 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

                                                
                                                
** /stderr **
pause_test.go:190: incorrect status code: 101
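Two separate symptoms show up in that output. The cluster StatusCode of 101 ("Pausing") is a transient state left over from the preceding Pause subtest, and it is what pause_test.go:190 rejects. The repeated stderr lines come from status.go apparently reading an empty exit-code string (the kubelet had already been stopped) and handing it to strconv.Atoi; a one-liner, illustrative only, that reproduces that exact error text:

	package main

	import (
		"fmt"
		"strconv"
	)

	func main() {
		// Parsing an empty string yields the error status.go logged:
		// strconv.Atoi: parsing "": invalid syntax
		_, err := strconv.Atoi("")
		fmt.Println(err)
	}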
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (287.940473ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25: (1.254018006s)
helpers_test.go:253: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                      | multinode-20210813202419-30853-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:13 UTC | Fri, 13 Aug 2021 20:39:14 UTC |
	|         | multinode-20210813202419-30853-m03      |                                         |         |         |                               |                               |
	| -p      | multinode-20210813202419-30853          | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:14 UTC | Fri, 13 Aug 2021 20:39:16 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:16 UTC | Fri, 13 Aug 2021 20:39:18 UTC |
	|         | multinode-20210813202419-30853          |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:02 UTC | Fri, 13 Aug 2021 20:43:38 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false             |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:38 UTC | Fri, 13 Aug 2021 20:43:41 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl pull busybox             |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:41 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2          |                                         |         |         |                               |                               |
	|         |  --container-runtime=crio               |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl image ls                 |                                         |         |         |                               |                               |
	| -p      | test-preload-20210813204102-30853       | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:25 UTC | Fri, 13 Aug 2021 20:44:26 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	| start   | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:26 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2             |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:21 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --cancel-scheduled                      |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:34 UTC | Fri, 13 Aug 2021 20:45:42 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --schedule 5s                           |                                         |         |         |                               |                               |
	| delete  | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:59 UTC | Fri, 13 Aug 2021 20:46:00 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:02 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:02 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubenet-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:03 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | kubenet-20210813204703-30853            |                                         |         |         |                               |                               |
	| delete  | -p false-20210813204703-30853           | false-20210813204703-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:04 UTC | Fri, 13 Aug 2021 20:47:04 UTC |
	| start   | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:42 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2    |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:42 UTC | Fri, 13 Aug 2021 20:47:44 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	| start   | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:48:55 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --memory=2048                      |                                         |         |         |                               |                               |
	|         | --wait=true --driver=kvm2               |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:55 UTC | Fri, 13 Aug 2021 20:48:57 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --install-addons=false                  |                                         |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:06 UTC | Fri, 13 Aug 2021 20:49:13 UTC |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:16 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:18 UTC | Fri, 13 Aug 2021 20:49:19 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:49:06
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:49:06.750460    3412 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:06.750532    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750535    3412 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:06.750538    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750645    3412 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:06.750968    3412 out.go:305] Setting JSON to false
	I0813 20:49:06.794979    3412 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9109,"bootTime":1628878638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:49:06.795299    3412 start.go:121] virtualization: kvm guest
	I0813 20:49:06.798215    3412 out.go:177] * [pause-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:49:06.799922    3412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:06.798386    3412 notify.go:169] Checking for updates...
	I0813 20:49:06.801691    3412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:49:06.803336    3412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:49:06.804849    3412 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:49:06.805220    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:06.805637    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.805697    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.817202    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0813 20:49:06.817597    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.818173    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.818195    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.818649    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.818887    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.819077    3412 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:49:06.819425    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.819465    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.830844    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0813 20:49:06.831324    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.831848    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.831871    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.832233    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.832415    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.865593    3412 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:49:06.865627    3412 start.go:278] selected driver: kvm2
	I0813 20:49:06.865641    3412 start.go:751] validating driver "kvm2" against &{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.865757    3412 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:49:06.866497    3412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.866703    3412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:49:06.878129    3412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:49:06.878764    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:06.878779    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:06.878789    3412 start_flags.go:277] config:
	{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.878936    3412 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.881128    3412 out.go:177] * Starting control plane node pause-20210813204600-30853 in cluster pause-20210813204600-30853
	I0813 20:49:06.881153    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:06.881197    3412 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:49:06.881216    3412 cache.go:56] Caching tarball of preloaded images
	I0813 20:49:06.881339    3412 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:49:06.881361    3412 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
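The preload path above short-circuits because the tarball already exists on disk. A sketch of that existence check, with a simplified, hypothetical path (minikube's preload.go resolves the real one differently):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// If the preloaded image tarball is already on disk, skip the download,
		// mirroring the "Found ... in cache, skipping download" line above.
		p := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload not cached:", err)
		}
	}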
	I0813 20:49:06.881476    3412 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/config.json ...
	I0813 20:49:06.881656    3412 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:49:06.881687    3412 start.go:313] acquiring machines lock for pause-20210813204600-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:49:06.881775    3412 start.go:317] acquired machines lock for "pause-20210813204600-30853" in 71.324µs
	I0813 20:49:06.881794    3412 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:49:06.881801    3412 fix.go:55] fixHost starting: 
	I0813 20:49:06.882135    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.882177    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.894411    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0813 20:49:06.894958    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.895630    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.895652    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.896024    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.896206    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.896395    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:06.899827    3412 fix.go:108] recreateIfNeeded on pause-20210813204600-30853: state=Running err=<nil>
	W0813 20:49:06.899844    3412 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:49:05.079802    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:06.902070    3412 out.go:177] * Updating the running kvm2 "pause-20210813204600-30853" VM ...
	I0813 20:49:06.902100    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902283    3412 machine.go:88] provisioning docker machine ...
	I0813 20:49:06.902305    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902430    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902571    3412 buildroot.go:166] provisioning hostname "pause-20210813204600-30853"
	I0813 20:49:06.902599    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902737    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:06.908023    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908395    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:06.908431    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908509    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:06.908703    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908861    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908990    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:06.909175    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:06.909381    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:06.909399    3412 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813204600-30853 && echo "pause-20210813204600-30853" | sudo tee /etc/hostname
	I0813 20:49:07.062168    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813204600-30853
	
	I0813 20:49:07.062210    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.068189    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068544    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.068577    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068759    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.068953    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069117    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069259    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.069439    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.069612    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.069649    3412 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813204600-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813204600-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813204600-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:49:07.221530    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:49:07.221612    3412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:49:07.221648    3412 buildroot.go:174] setting up certificates
	I0813 20:49:07.221660    3412 provision.go:83] configureAuth start
	I0813 20:49:07.221672    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:07.221918    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:07.227471    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.227839    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.227868    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.228085    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.232869    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233213    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.233251    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233347    3412 provision.go:138] copyHostCerts
	I0813 20:49:07.233436    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:49:07.233450    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:49:07.233511    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:49:07.233650    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:49:07.233667    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:49:07.233695    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:49:07.233774    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:49:07.233784    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:49:07.233812    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:49:07.233859    3412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813204600-30853 san=[192.168.39.61 192.168.39.61 localhost 127.0.0.1 minikube pause-20210813204600-30853]
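The provisioning step above generates a server certificate whose SANs cover the VM IP, localhost, and the machine name. A self-contained Go sketch of the same idea using crypto/x509; it self-signs for brevity, whereas the log shows minikube signing with its ca.pem/ca-key.pem pair:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SAN list taken from the provision.go line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-20210813204600-30853"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.61"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "pause-20210813204600-30853"},
		}
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		// Self-signed here; minikube signs with the CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}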
	I0813 20:49:07.320299    3412 provision.go:172] copyRemoteCerts
	I0813 20:49:07.320390    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:49:07.320428    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.325783    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326112    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.326152    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326310    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.326478    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.326610    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.326733    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:07.427180    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:49:07.450672    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:49:07.471272    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:49:07.489660    3412 provision.go:86] duration metric: configureAuth took 267.984336ms
	I0813 20:49:07.489686    3412 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:49:07.489862    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:07.489982    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.495300    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495618    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.495653    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495797    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.495985    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496150    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496279    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.496434    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.496609    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.496631    3412 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:49:08.602797    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:49:08.602830    3412 machine.go:91] provisioned docker machine in 1.700528876s
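The literal %!s(MISSING) in the command above (and in the date +%!s(MISSING).%!N(MISSING) call further down) is not part of what actually ran on the VM; the command output shows the real commands were intact. It is Go's fmt package flagging a format verb that reached a Printf-style logger without a matching argument. A two-line demonstration:

	package main

	import "fmt"

	func main() {
		// A format string containing verbs but given no arguments prints the
		// %!s(MISSING)-style markers seen in the log:
		// date +%!s(MISSING).%!N(MISSING)
		fmt.Printf("date +%s.%N\n")
	}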
	I0813 20:49:08.602841    3412 start.go:267] post-start starting for "pause-20210813204600-30853" (driver="kvm2")
	I0813 20:49:08.602846    3412 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:49:08.602880    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.603196    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:49:08.603247    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.608420    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608704    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.608735    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608875    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.609064    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.609198    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.609343    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.709733    3412 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:49:08.715709    3412 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:49:08.715731    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:49:08.715792    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:49:08.715871    3412 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:49:08.715956    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:49:08.724293    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:08.750217    3412 start.go:270] post-start completed in 147.362269ms
	I0813 20:49:08.750260    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.750492    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.756215    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756621    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.756650    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756812    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.757034    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757170    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757300    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.757480    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:08.757670    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:08.757683    3412 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 20:49:08.900897    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887748.901369788
	
	I0813 20:49:08.900932    3412 fix.go:212] guest clock: 1628887748.901369788
	I0813 20:49:08.900944    3412 fix.go:225] Guest: 2021-08-13 20:49:08.901369788 +0000 UTC Remote: 2021-08-13 20:49:08.750472863 +0000 UTC m=+2.052052145 (delta=150.896925ms)
	I0813 20:49:08.900988    3412 fix.go:196] guest clock delta is within tolerance: 150.896925ms
	I0813 20:49:08.900996    3412 fix.go:57] fixHost completed within 2.019194265s
	I0813 20:49:08.901002    3412 start.go:80] releasing machines lock for "pause-20210813204600-30853", held for 2.019216553s
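The guest-clock check above reads date +%s.%N over SSH and compares it with the host clock, accepting the ~151ms skew. The same arithmetic with the values from the log (the one-second tolerance here is assumed for illustration; fix.go's actual threshold may differ):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1628887748, 901369788)                      // VM clock, from the SSH output above
		host := time.Date(2021, 8, 13, 20, 49, 8, 750472863, time.UTC) // host-side timestamp from the log
		delta := guest.Sub(host)
		// Prints: 150.896925ms within tolerance: true
		fmt.Println(delta, "within tolerance:", delta < time.Second && delta > -time.Second)
	}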
	I0813 20:49:08.901046    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.901309    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:08.906817    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907191    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.907257    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907379    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.907574    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908140    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908391    3412 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:08.908418    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.908488    3412 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:49:08.908539    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.915229    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915547    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.915580    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915727    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.915920    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.916011    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916080    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.916237    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.916429    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.916461    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916636    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.916784    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.917107    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.917257    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:09.014176    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:09.014353    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:09.061257    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:09.061287    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:09.061352    3412 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:49:09.075880    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:49:09.086949    3412 docker.go:153] disabling docker service ...
	I0813 20:49:09.087012    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:49:09.103245    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:49:09.117178    3412 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:49:09.373507    3412 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:49:09.585738    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:49:09.599794    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:49:09.615240    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:49:09.623727    3412 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:49:09.630919    3412 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:49:09.637747    3412 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:49:09.808564    3412 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:49:09.952030    3412 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:49:09.952144    3412 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:49:09.959400    3412 start.go:413] Will wait 60s for crictl version
	I0813 20:49:09.959452    3412 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:49:09.991124    3412 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:49:09.991251    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.280528    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.528655    3412 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:49:10.528694    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:10.534359    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.534782    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:10.534815    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.535076    3412 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:49:10.539953    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:10.540017    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.583397    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.583419    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:10.583459    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.620617    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.620642    3412 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:49:10.620703    3412 ssh_runner.go:149] Run: crio config
	I0813 20:49:10.896405    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:10.896427    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:10.896436    3412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:49:10.896448    3412 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813204600-30853 NodeName:pause-20210813204600-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.61 CgroupDriver:systemd ClientCAFile:/var/lib/m
inikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:49:10.896629    3412 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210813204600-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
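The three stacked documents above (kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what the scp step below ships to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of how such a file is consumed on the node; the --ignore-preflight-errors value is an assumption, not taken from this log:

	sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --ignore-preflight-errors=SystemVerification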
	
	I0813 20:49:10.896754    3412 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=pause-20210813204600-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.61 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
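After a kubelet unit and drop-in like the ones above are written out (see the scp lines that follow), the standard activation sequence would be a daemon-reload plus a kubelet restart; a minimal sketch, assuming systemd manages kubelet here as the unit text implies:

	sudo systemctl daemon-reload
	sudo systemctl restart kubelet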
	I0813 20:49:10.896819    3412 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:49:10.911638    3412 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:49:10.911723    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:49:10.920269    3412 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0813 20:49:10.933623    3412 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:49:10.945877    3412 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0813 20:49:10.958716    3412 ssh_runner.go:149] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0813 20:49:10.962845    3412 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853 for IP: 192.168.39.61
	I0813 20:49:10.962912    3412 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:49:10.962936    3412 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:49:10.963041    3412 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.key
	I0813 20:49:10.963067    3412 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key.e9ce627b
	I0813 20:49:10.963088    3412 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key
	I0813 20:49:10.963223    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:49:10.963274    3412 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:49:10.963290    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:49:10.963332    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:49:10.963362    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:49:10.963395    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:49:10.963481    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:10.964763    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:49:10.996208    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:49:11.015193    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:49:11.032382    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:49:11.050461    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:49:11.067415    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:49:11.085267    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:49:11.102588    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:49:11.128113    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:49:11.146008    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:49:11.162723    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:49:11.181637    3412 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:49:11.193799    3412 ssh_runner.go:149] Run: openssl version
	I0813 20:49:11.199783    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:49:11.209928    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214459    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214508    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.221207    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:49:11.229476    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:49:11.237550    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245454    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245501    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.251754    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:49:11.258461    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:49:11.267146    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271736    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271779    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.278000    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
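The test -L / ln -fs pairs above create OpenSSL subject-hash symlinks (3ec20f2e.0, b5213941.0, 51391683.0): the link name is the hash that openssl x509 -hash prints for the certificate being linked. A minimal sketch of that derivation for one of the certs in this log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"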
	I0813 20:49:11.284415    3412 kubeadm.go:390] StartCluster: {Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 Clu
sterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:11.284518    3412 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:49:11.284561    3412 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:49:11.324305    3412 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:11.324324    3412 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:11.324329    3412 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:11.324336    3412 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:11.324339    3412 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:11.324343    3412 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:11.324347    3412 cri.go:76] found id: ""
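Each non-empty ID above is a kube-system container that CRI-O reported for the pause flow; any of them can be examined directly with crictl, e.g. (a hedged example, not a step from this log):

	sudo crictl inspect d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6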
	I0813 20:49:11.324383    3412 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:11.370394    3412 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.prop
erty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.Contai
nerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30
853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c3
9b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe9
37ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","i
o.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes
.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af7
8ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210
813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff81
9d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d8
2b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/co
nfig.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.H
ostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.M
ountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"k
ube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/userdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"Fi
le","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.ui
d\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kube
rnetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.
61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/userdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePol
icy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kub
e-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","
io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/v
olume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:
48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-41
7f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-
o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968
848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container
.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.p
od.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProf
ilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d2
2eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.
kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/
storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\
":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created"
:"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/sto
rage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernet
es.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d55244
9a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",
\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]",
"io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode"
:"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:11.370977    3412 cri.go:113] list returned 13 containers
	I0813 20:49:11.370992    3412 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:11.371004    3412 cri.go:122] skipping {2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 running}: state = "running", want "paused"
	I0813 20:49:11.371014    3412 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:11.371019    3412 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:11.371023    3412 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:11.371028    3412 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:11.371034    3412 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:11.371040    3412 cri.go:122] skipping {66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf running}: state = "running", want "paused"
	I0813 20:49:11.371048    3412 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:11.371054    3412 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:11.371063    3412 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:11.371069    3412 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:11.371076    3412 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:11.371081    3412 cri.go:122] skipping {82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b running}: state = "running", want "paused"
	I0813 20:49:11.371087    3412 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:11.371091    3412 cri.go:122] skipping {83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 running}: state = "running", want "paused"
	I0813 20:49:11.371099    3412 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:11.371105    3412 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:11.371110    3412 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:11.371115    3412 cri.go:122] skipping {ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf running}: state = "running", want "paused"
	I0813 20:49:11.371119    3412 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:11.371127    3412 cri.go:122] skipping {d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 running}: state = "running", want "paused"
	I0813 20:49:11.371135    3412 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:11.371144    3412 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:11.371154    3412 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:11.371164    3412 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
	I0813 20:49:11.371203    3412 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:49:11.379585    3412 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:49:11.379610    3412 kubeadm.go:600] restartCluster start
	I0813 20:49:11.379656    3412 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:49:11.387273    3412 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:49:11.388131    3412 kubeconfig.go:93] found "pause-20210813204600-30853" server: "https://192.168.39.61:8443"
	I0813 20:49:11.389906    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.391540    3412 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:49:11.398645    3412 api_server.go:164] Checking apiserver status ...
	I0813 20:49:11.398727    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:11.410339    3412 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2593/cgroup
	I0813 20:49:11.416825    3412 api_server.go:180] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope"
	I0813 20:49:11.416874    3412 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope/freezer.state
	I0813 20:49:11.424153    3412 api_server.go:202] freezer state: "THAWED"
	I0813 20:49:11.424172    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:11.430386    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0813 20:49:11.447400    3412 system_pods.go:86] 6 kube-system pods found
	I0813 20:49:11.447439    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:11.447446    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:11.447453    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:11.447457    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:11.447460    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:11.447465    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:11.448566    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:11.448586    3412 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.39.61
	I0813 20:49:11.448597    3412 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:49:11.448603    3412 kubeadm.go:604] restartCluster took 68.987456ms
	I0813 20:49:11.448610    3412 kubeadm.go:392] StartCluster complete in 164.201481ms
	I0813 20:49:11.448627    3412 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.448743    3412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:11.449587    3412 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.450509    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.454641    3412 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813204600-30853" rescaled to 1
	I0813 20:49:11.454698    3412 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:49:11.454707    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:49:11.456952    3412 out.go:177] * Verifying Kubernetes components...
	I0813 20:49:11.457008    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:11.454754    3412 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:49:11.457069    3412 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.455000    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:11.457090    3412 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813204600-30853"
	W0813 20:49:11.457098    3412 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:49:11.457112    3412 addons.go:59] Setting default-storageclass=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.457130    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.457136    3412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813204600-30853"
	I0813 20:49:11.457449    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457490    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.457642    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457688    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.468728    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0813 20:49:11.469146    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.469685    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.469705    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.470063    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.470584    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.470626    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.476732    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0813 20:49:11.477171    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.477677    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.477701    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.478079    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.478277    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.482479    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.483740    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0813 20:49:11.484114    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.484536    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.484555    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.484941    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.485097    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.487884    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.490267    3412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:49:11.488882    3412 addons.go:135] Setting addon default-storageclass=true in "pause-20210813204600-30853"
	W0813 20:49:11.490289    3412 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:49:11.490323    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.490374    3412 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.490389    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:49:11.490406    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.490689    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.490728    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.496655    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497065    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.497093    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497244    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.497423    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.497618    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.497767    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.503422    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0813 20:49:11.503821    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.504277    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.504306    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.504582    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.505173    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.505219    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.518799    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0813 20:49:11.519214    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.519629    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.519655    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.519995    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.520180    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.523435    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.523650    3412 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:11.523666    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:49:11.523682    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.529028    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529396    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.529423    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529571    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.529736    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.529865    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.530004    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.605965    3412 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:49:11.606090    3412 node_ready.go:35] waiting up to 6m0s for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610421    3412 node_ready.go:49] node "pause-20210813204600-30853" has status "Ready":"True"
	I0813 20:49:11.610442    3412 node_ready.go:38] duration metric: took 4.320432ms waiting for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610453    3412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:11.616546    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.616740    3412 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631733    3412 pod_ready.go:92] pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.631757    3412 pod_ready.go:81] duration metric: took 14.992576ms waiting for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631771    3412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639091    3412 pod_ready.go:92] pod "etcd-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.639117    3412 pod_ready.go:81] duration metric: took 7.33748ms waiting for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639129    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645487    3412 pod_ready.go:92] pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.645508    3412 pod_ready.go:81] duration metric: took 6.370538ms waiting for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645519    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652583    3412 pod_ready.go:92] pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.652602    3412 pod_ready.go:81] duration metric: took 7.073719ms waiting for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652614    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.658710    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:12.038755    3412 pod_ready.go:92] pod "kube-proxy-4n8kb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.038776    3412 pod_ready.go:81] duration metric: took 386.155583ms waiting for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.038787    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.069005    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069032    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069056    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069036    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069332    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069333    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069336    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069348    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069357    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069364    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069368    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069371    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069377    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069380    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069631    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069649    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069664    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069635    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069693    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069706    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069717    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069914    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069931    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.071889    3412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:49:12.071910    3412 addons.go:344] enableAddons completed in 617.161828ms
	I0813 20:49:12.434704    3412 pod_ready.go:92] pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.434726    3412 pod_ready.go:81] duration metric: took 395.931948ms waiting for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.434734    3412 pod_ready.go:38] duration metric: took 824.269103ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:12.434752    3412 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:12.434790    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:12.451457    3412 api_server.go:70] duration metric: took 996.725767ms to wait for apiserver process to appear ...
	I0813 20:49:12.451487    3412 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:12.451500    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:12.457776    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0813 20:49:12.458697    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:12.458716    3412 api_server.go:129] duration metric: took 7.221294ms to wait for apiserver health ...
	I0813 20:49:12.458726    3412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:12.637203    3412 system_pods.go:59] 7 kube-system pods found
	I0813 20:49:12.637240    3412 system_pods.go:61] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:12.637248    3412 system_pods.go:61] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:12.637254    3412 system_pods.go:61] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:12.637261    3412 system_pods.go:61] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:12.637266    3412 system_pods.go:61] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:12.637272    3412 system_pods.go:61] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:12.637281    3412 system_pods.go:61] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:12.637290    3412 system_pods.go:74] duration metric: took 178.557519ms to wait for pod list to return data ...
	I0813 20:49:12.637299    3412 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:12.841324    3412 default_sa.go:45] found service account: "default"
	I0813 20:49:12.841350    3412 default_sa.go:55] duration metric: took 204.040505ms for default service account to be created ...
	I0813 20:49:12.841359    3412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:13.042158    3412 system_pods.go:86] 7 kube-system pods found
	I0813 20:49:13.042205    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:13.042216    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:13.042224    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:13.042237    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:13.042245    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:13.042257    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:13.042278    3412 system_pods.go:89] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:13.042295    3412 system_pods.go:126] duration metric: took 200.930278ms to wait for k8s-apps to be running ...
	I0813 20:49:13.042313    3412 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:13.042369    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:13.056816    3412 system_svc.go:56] duration metric: took 14.491659ms WaitForService to wait for kubelet.
	I0813 20:49:13.056852    3412 kubeadm.go:547] duration metric: took 1.60212918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:49:13.056882    3412 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:13.236184    3412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:49:13.236241    3412 node_conditions.go:123] node cpu capacity is 2
	I0813 20:49:13.236260    3412 node_conditions.go:105] duration metric: took 179.373183ms to run NodePressure ...
	I0813 20:49:13.236273    3412 start.go:231] waiting for startup goroutines ...
	I0813 20:49:13.296415    3412 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:49:13.298518    3412 out.go:177] * Done! kubectl is now configured to use "pause-20210813204600-30853" cluster and "default" namespace by default
	I0813 20:49:10.080830    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:10.579566    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.540519    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40792->192.168.50.24:8443: read: connection reset by peer
	I0813 20:49:14.579739    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.580451    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.079298    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.079947    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.579678    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.580450    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:16.078922    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:16.079480    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:16.578921    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:16.579558    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:17.079061    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:17.079634    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:17.578938    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:17.579564    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:18.078941    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:18.079479    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:18.579014    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:18.579747    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:19.078958    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:19.079711    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:19.578954    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:19.579634    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:21 UTC. --
	Aug 13 20:49:19 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:19.459389380Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,StartedAt:1628887753062172592,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-prov
isioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e2904b0d-fccb-4ad4-a317-34e5477457b8 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 20:49:20 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:20.839500010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bddaabd4-6bbf-49f4-b400-db0407cc87b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:20 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:20.840381769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bddaabd4-6bbf-49f4-b400-db0407cc87b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:20 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:20.840885918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bddaabd4-6bbf-49f4-b400-db0407cc87b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
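	
	The request/response pairs in this CRI-O log are the CRI gRPC API (runtime.v1alpha2.RuntimeService) being polled by a client such as the kubelet: ListContainers with an empty filter takes the "No filters were applied, returning full container list" path, and the earlier ContainerStatus entry is the per-container variant of the same service. A sketch of such a client using k8s.io/cri-api is below; the socket path assumes a stock CRI-O install, and recent grpc-go versions accept the unix:// scheme directly (older code used a custom dialer).

	// List containers over the CRI, as the log entries above show a
	// client doing. Sketch under the assumptions stated in the lead-in.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumption: default CRI-O socket path.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty ListContainersRequest (nil filter) returns every
		// container, mirroring the responses logged above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	The same result is available from the command line with crictl ps, which speaks this API over the same socket.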
	[Six further ListContainers request/response cycles, identical to the one above apart from timestamps and request ids (fe249be4, c08833d3, 33263373, e9631b6b, f3d8cf46, 7b87730b), repeated at roughly 50 ms intervals with the container list unchanged, are elided here.]
	Aug 13 20:49:21 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:21.175163717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=69d08983-b56b-4479-bd1f-411c72504c80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:21 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:21.175297449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=69d08983-b56b-4479-bd1f-411c72504c80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:21 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:21.175460144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=69d08983-b56b-4479-bd1f-411c72504c80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	10dab2af99578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 seconds ago        Running             storage-provisioner       0                   2a6ab48b5042a
	d33287457e451       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   55 seconds ago       Running             coredns                   0                   8088cc5d3d38a
	2e50c328d7104       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   55 seconds ago       Running             kube-proxy                0                   564d5f18f75ed
	ac4bf726a8a57       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   e992003133001
	66655950d3afa       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   55ddf08f50f8c
	83df9633ff352       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   6c56d5bf50b7a
	82d4de99d88e5       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   f228ab759c26a
	
	* 
	* ==> coredns [d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6] <==
	* I0813 20:48:56.155624       1 trace.go:205] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[1427131847]: [30.002619331s] [30.002619331s] END
	E0813 20:48:56.155739       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155858       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.154) (total time: 30001ms):
	Trace[911902081]: [30.001733139s] [30.001733139s] END
	E0813 20:48:56.155865       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155918       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[2019727887]: [30.002706635s] [30.002706635s] END
	E0813 20:48:56.156104       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
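	
	The three reflector timeouts above are CoreDNS failing to reach the apiserver's Service VIP (10.96.0.1:443) for roughly the first 30 seconds after start, before kube-proxy had finished programming the service rules; once the watches reconnect, the config reload completes normally. A minimal sketch of the same connectivity check (not part of the harness; the VIP is taken from the errors above, and it assumes it runs inside the pod network):
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        // 10.96.0.1:443 is the kubernetes Service VIP from the reflector errors.
	        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	        if err != nil {
	            // Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" failure mode;
	            // a timeout here points at service routing (kube-proxy/iptables),
	            // not at the apiserver process itself.
	            fmt.Println("dial failed:", err)
	            return
	        }
	        defer conn.Close()
	        fmt.Println("service VIP reachable")
	    }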
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813204600-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813204600-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813204600-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_48_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:48:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813204600-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-20210813204600-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 07e647a52575478182b10082d1b9460a
	  System UUID:                07e647a5-2575-4781-82b1-0082d1b9460a
	  Boot ID:                    1c1f8243-ce7f-455c-a669-de6493424040
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-4grvm                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-20210813204600-30853                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         64s
	  kube-system                 kube-apiserver-pause-20210813204600-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-pause-20210813204600-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-4n8kb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-20210813204600-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  87s (x6 over 87s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x5 over 87s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x5 over 87s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 65s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                63s                kubelet     Node pause-20210813204600-30853 status is now: NodeReady
	  Normal  Starting                 55s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	*               If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.165176] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.050992] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.137498] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1726 comm=systemd-network
	[  +1.376463] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007022] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.624786] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +20.400328] systemd-fstab-generator[2162]: Ignoring "noauto" for root device
	[  +0.134832] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +0.282454] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +6.552961] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[Aug13 20:48] systemd-fstab-generator[2800]: Ignoring "noauto" for root device
	[ +13.894926] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.479825] kauditd_printk_skb: 80 callbacks suppressed
	[Aug13 20:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.187207] systemd-fstab-generator[4013]: Ignoring "noauto" for root device
	[  +0.260965] systemd-fstab-generator[4026]: Ignoring "noauto" for root device
	[  +0.242550] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
	[  +3.941917] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.801138] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[  +1.042940] systemd-fstab-generator[4407]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf] <==
	* 2021-08-13 20:48:01.922733 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:48:01.952757 I | embed: serving client requests on 192.168.39.61:2379
	2021-08-13 20:48:01.954160 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:48:01.975055 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:48:12.629799 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (355.071918ms) to execute
	2021-08-13 20:48:18.621673 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" " with result "range_response_count:0 size:5" took too long (1.837036221s) to execute
	2021-08-13 20:48:18.622362 W | wal: sync duration of 1.607346013s, expected less than 1s
	2021-08-13 20:48:18.623060 W | etcdserver: request "header:<ID:12771218163585540132 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" value_size:632 lease:3547846126730764118 >> failure:<>>" with result "size:16" took too long (1.606807479s) to execute
	2021-08-13 20:48:18.624926 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.461501725s) to execute
	2021-08-13 20:48:18.628021 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813204600-30853\" " with result "range_response_count:1 size:3982" took too long (1.370325429s) to execute
	2021-08-13 20:48:21.346921 W | wal: sync duration of 1.299304523s, expected less than 1s
	2021-08-13 20:48:21.347401 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.068677828s) to execute
	2021-08-13 20:48:24.481477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:26.500706 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (233.724165ms) to execute
	2021-08-13 20:48:26.501137 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gm2bv\" " with result "range_response_count:1 size:4473" took too long (378.683681ms) to execute
	2021-08-13 20:48:26.502059 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-4grvm\" " with result "range_response_count:1 size:4461" took too long (270.883259ms) to execute
	2021-08-13 20:48:28.869625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:38.868019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:48.868044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:58.870803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:49:00.399177 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.157615469s) to execute
	2021-08-13 20:49:00.400612 W | etcdserver: request "header:<ID:12771218163585540646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" mod_revision:468 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" value_size:584 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" > >>" with result "size:16" took too long (200.747119ms) to execute
	2021-08-13 20:49:00.400917 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (1.158534213s) to execute
	2021-08-13 20:49:00.401297 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (569.698ms) to execute
	2021-08-13 20:49:08.868736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
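	
	The "wal: sync duration ... expected less than 1s" warnings above indicate slow fsync on the VM's disk, which is also what stretches the read-only range requests past a second. A rough standalone approximation of the latency etcd is measuring (a hypothetical probe, not part of the harness; the /var/lib/minikube path is an assumption and should be any directory on the same disk as the etcd data dir):
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "time"
	    )
	
	    func main() {
	        // Path is an assumption; pick a directory on the etcd data disk.
	        f, err := os.CreateTemp("/var/lib/minikube", "fsync-probe-*")
	        if err != nil {
	            panic(err)
	        }
	        defer os.Remove(f.Name())
	        defer f.Close()
	
	        buf := make([]byte, 64*1024) // roughly WAL-write sized
	        for i := 0; i < 5; i++ {
	            start := time.Now()
	            if _, err := f.Write(buf); err != nil {
	                panic(err)
	            }
	            if err := f.Sync(); err != nil { // fsync(2), the call etcd's WAL times
	                panic(err)
	            }
	            fmt.Printf("write+sync %d: %v\n", i, time.Since(start))
	        }
	    }
	
	Sustained results near or above a second from a probe like this would account for the etcd warnings without implicating etcd itself.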
	
	* 
	* ==> kernel <==
	*  20:49:21 up 2 min,  0 users,  load average: 1.86, 0.77, 0.29
	Linux pause-20210813204600-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b] <==
	* Trace[1175388272]: [1.383273804s] [1.383273804s] END
	I0813 20:48:18.647776       1 trace.go:205] Trace[1480647024]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.255) (total time: 1391ms):
	Trace[1480647024]: ---"Object stored in database" 1379ms (20:48:00.638)
	Trace[1480647024]: [1.391864844s] [1.391864844s] END
	I0813 20:48:18.651341       1 trace.go:205] Trace[532588033]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[532588033]: [1.395160654s] [1.395160654s] END
	I0813 20:48:18.651913       1 trace.go:205] Trace[486245217]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[486245217]: [1.395849853s] [1.395849853s] END
	I0813 20:48:18.659173       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:48:21.348539       1 trace.go:205] Trace[264690694]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:20.278) (total time: 1070ms):
	Trace[264690694]: [1.070400996s] [1.070400996s] END
	I0813 20:48:22.995388       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:48:23.545730       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:48:37.713151       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:48:37.713388       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:48:37.713410       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:49:00.401993       1 trace.go:205] Trace[875370503]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.240) (total time: 1161ms):
	Trace[875370503]: ---"About to write a response" 1161ms (20:49:00.401)
	Trace[875370503]: [1.161749328s] [1.161749328s] END
	I0813 20:49:00.403705       1 trace.go:205] Trace[1375945297]: "Get" url:/api/v1/nodes/pause-20210813204600-30853,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.241) (total time: 1162ms):
	Trace[1375945297]: ---"About to write a response" 1161ms (20:49:00.403)
	Trace[1375945297]: [1.162052238s] [1.162052238s] END
	I0813 20:49:08.639766       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:49:08.639943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:49:08.639963       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659] <==
	* I0813 20:48:22.670523       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0813 20:48:22.676047       1 shared_informer.go:247] Caches are synced for job 
	I0813 20:48:22.676648       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:48:22.680632       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:48:22.680827       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0813 20:48:22.713877       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:48:22.743162       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:48:22.743798       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:48:22.849717       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0813 20:48:22.888695       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:22.888733       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:48:22.923738       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:48:22.923844       1 disruption.go:371] Sending events to api server.
	I0813 20:48:22.939921       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:23.006118       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4n8kb"
	E0813 20:48:23.080425       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4ec5a127-3b2a-4f66-8321-f0bab85709c0", ResourceVersion:"304", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484491, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000abfda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000abfdb8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014a9280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00142b740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abf
dd0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abfde8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014a92c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001419440), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00144e5a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000843e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00163c430)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00144e608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:48:23.316478       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352329       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352427       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:48:23.554638       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:48:23.583893       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:48:23.645559       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gm2bv"
	I0813 20:48:23.652683       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4grvm"
	I0813 20:48:23.772425       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gm2bv"
	
	* 
	* ==> kube-proxy [2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164] <==
	* I0813 20:48:26.523023       1 node.go:172] Successfully retrieved node IP: 192.168.39.61
	I0813 20:48:26.523578       1 server_others.go:140] Detected node IP 192.168.39.61
	W0813 20:48:26.523867       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:48:26.597173       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:48:26.597466       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:48:26.597629       1 server_others.go:212] Using iptables Proxier.
	I0813 20:48:26.599876       1 server.go:643] Version: v1.21.3
	I0813 20:48:26.601871       1 config.go:315] Starting service config controller
	I0813 20:48:26.601925       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:48:26.601964       1 config.go:224] Starting endpoint slice config controller
	I0813 20:48:26.601993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:48:26.626937       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:48:26.631306       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf] <==
	* E0813 20:48:07.253858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:48:07.253939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:07.254089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:48:07.254763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:07.256625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:07.257805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:07.257988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:07.258811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:07.259413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:48:07.261132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.091658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:08.147159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:08.202089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:08.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:08.318956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.416964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:08.426635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:08.429682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.498271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.623065       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:48:08.623400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.652497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:48:11.848968       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:21 UTC. --
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:37.311628    2809 scope.go:111] "RemoveContainer" containerID="09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92"
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: E0813 20:48:37.324554    2809 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": container with ID starting with 09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92 not found: ID does not exist" containerID="09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92"
	Aug 13 20:48:37 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:37.324683    2809 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92} err="failed to get container status \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": rpc error: code = NotFound desc = could not find container \"09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92\": container with ID starting with 09872bc6cd6af1bc00570fe8d1a5d72a4cabd00b48d1b9fc6af7bc4b15790d92 not found: ID does not exist"
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: W0813 20:48:38.042626    2809 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/21759cc2-1fdb-417f-bc71-01fb6f9d0c35/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043002    2809 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume" (OuterVolumeSpecName: "config-volume") pod "21759cc2-1fdb-417f-bc71-01fb6f9d0c35" (UID: "21759cc2-1fdb-417f-bc71-01fb6f9d0c35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043446    2809 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume\") pod \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\" (UID: \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\") "
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.043655    2809 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdzg\" (UniqueName: \"kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg\") pod \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\" (UID: \"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\") "
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.044383    2809 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-config-volume\") on node \"pause-20210813204600-30853\" DevicePath \"\""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.054821    2809 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg" (OuterVolumeSpecName: "kube-api-access-2wdzg") pod "21759cc2-1fdb-417f-bc71-01fb6f9d0c35" (UID: "21759cc2-1fdb-417f-bc71-01fb6f9d0c35"). InnerVolumeSpecName "kube-api-access-2wdzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:48:38 pause-20210813204600-30853 kubelet[2809]: I0813 20:48:38.145496    2809 reconciler.go:319] "Volume detached for volume \"kube-api-access-2wdzg\" (UniqueName: \"kubernetes.io/projected/21759cc2-1fdb-417f-bc71-01fb6f9d0c35-kube-api-access-2wdzg\") on node \"pause-20210813204600-30853\" DevicePath \"\""
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: W0813 20:49:07.659903    2809 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: W0813 20:49:07.660584    2809 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.734933    2809 remote_image.go:71] "ListImages with filter from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.735512    2809 kuberuntime_image.go:136] "Failed to list images" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:07 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:07.735759    2809 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: failed to get image stats: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.446980    2809 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.447053    2809 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:08 pause-20210813204600-30853 kubelet[2809]: E0813 20:49:08.447095    2809 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.083985    2809 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.126901    2809 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76-tmp\") pod \"storage-provisioner\" (UID: \"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\") "
	Aug 13 20:49:12 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:12.127447    2809 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2qn\" (UniqueName: \"kubernetes.io/projected/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76-kube-api-access-8s2qn\") pod \"storage-provisioner\" (UID: \"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\") "
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:49:13 pause-20210813204600-30853 kubelet[2809]: I0813 20:49:13.808051    2809 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:49:13 pause-20210813204600-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5] <==
	* I0813 20:49:13.139876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:49:13.163404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:49:13.163867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:49:13.184473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:49:13.184758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	I0813 20:49:13.194291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e31d828-490b-41db-8431-f66bfdb15cd4", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c became leader
	I0813 20:49:13.286143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	

                                                
                                                
-- /stdout --
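The kubelet entries in the dump above show repeated gRPC dial failures against /var/run/crio/crio.sock while CRI-O was unavailable ("connect: no such file or directory"), followed by systemd stopping the kubelet. Below is a minimal sketch of the same reachability check the kubelet's remote runtime client effectively performs, using only the Go standard library; the socket path is taken from the log, while the timeout and retry budget are assumptions for illustration, not values from the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeCRISocket dials the CRI-O unix socket the way the kubelet's gRPC
// client ultimately does. A missing socket file surfaces as the same
// "connect: no such file or directory" seen in the kubelet log above.
func probeCRISocket(path string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("unix", path, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // socket is accepting connections again
		}
		fmt.Printf("attempt %d: %v\n", i+1, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Socket path from the log; 5 attempts at 1s intervals is an assumption.
	if err := probeCRISocket("/var/run/crio/crio.sock", 5, time.Second); err != nil {
		fmt.Println("CRI socket still unreachable:", err)
	}
}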
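The storage-provisioner section of the dump shows the standard client-go leader-election handshake: attempt to acquire the kube-system/k8s.io-minikube-hostpath lease, emit a LeaderElection event, then start the provisioner controller. A sketch of that acquire-then-run pattern using client-go's leaderelection package follows; the lease name and namespace come from the log, but the Lease lock type, identity string, and timing values are assumptions (the real provisioner wires this up through its controller library and, per the event above, may use a different lock resource).

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	// Lease name/namespace match the log; the Lease lock type is an
	// assumption (the log's event is recorded against an Endpoints object).
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // timing values are illustrative
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired lease, starting provisioner controller")
				<-ctx.Done() // the controller loop would run here
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease, shutting down")
			},
		},
	})
}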
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (301.153382ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813204600-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/VerifyStatus]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813204600-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813204600-30853 describe pod : exit status 1 (66.966913ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813204600-30853 describe pod : exit status 1
--- FAIL: TestPause/serial/VerifyStatus (2.34s)
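Note that the post-mortem helper itself trips over an empty result here: the jsonpath query for non-running pods returns an empty string, so `kubectl describe pod` is invoked with no resource names and exits 1 with "resource name may not be empty". A hedged sketch of the guard such a helper could apply before shelling out; the function and parameter names below are hypothetical, not the test suite's own.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunningPods (hypothetical name) runs
// `kubectl describe pod <names...>` only when the jsonpath query actually
// produced pod names, avoiding the "resource name may not be empty" failure
// seen above.
func describeNonRunningPods(kubectlContext, names string) error {
	fields := strings.Fields(names)
	if len(fields) == 0 {
		fmt.Println("no non-running pods to describe")
		return nil
	}
	args := append([]string{"--context", kubectlContext, "describe", "pod"}, fields...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Empty jsonpath output, as in the log above: the guard short-circuits.
	_ = describeNonRunningPods("pause-20210813204600-30853", "")
}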

                                                
                                    
TestPause/serial/PauseAgain (9.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813204600-30853 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813204600-30853 --alsologtostderr -v=5: exit status 80 (5.837288934s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813204600-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:49:23.253749    3923 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:23.253882    3923 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:23.253896    3923 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:23.253903    3923 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:23.254073    3923 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:23.254318    3923 out.go:305] Setting JSON to false
	I0813 20:49:23.254351    3923 mustload.go:65] Loading cluster: pause-20210813204600-30853
	I0813 20:49:23.254741    3923 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:23.255319    3923 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:23.255391    3923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:23.266578    3923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44173
	I0813 20:49:23.267060    3923 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:23.267661    3923 main.go:130] libmachine: Using API Version  1
	I0813 20:49:23.267680    3923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:23.268030    3923 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:23.268256    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:23.272004    3923 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:23.272503    3923 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:23.272577    3923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:23.297222    3923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0813 20:49:23.297799    3923 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:23.298392    3923 main.go:130] libmachine: Using API Version  1
	I0813 20:49:23.298419    3923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:23.298948    3923 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:23.299189    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:23.300019    3923 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813204600-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:49:23.302787    3923 out.go:177] * Pausing node pause-20210813204600-30853 ... 
	I0813 20:49:23.302811    3923 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:23.303255    3923 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:23.303309    3923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:23.317807    3923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0813 20:49:23.318241    3923 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:23.318799    3923 main.go:130] libmachine: Using API Version  1
	I0813 20:49:23.318819    3923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:23.319213    3923 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:23.319405    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:23.319635    3923 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:23.319663    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:23.326162    3923 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:23.326642    3923 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:23.326669    3923 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:23.326910    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:23.327098    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:23.327318    3923 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:23.327490    3923 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:23.464925    3923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:23.475228    3923 pause.go:50] kubelet running: true
	I0813 20:49:23.475290    3923 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:28.747257    3923 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.271939903s)
	I0813 20:49:28.747343    3923 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:49:28.747413    3923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:49:28.894918    3923 cri.go:76] found id: "10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5"
	I0813 20:49:28.894950    3923 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:28.894957    3923 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:28.894972    3923 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:28.894978    3923 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:28.894983    3923 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:28.894988    3923 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:28.894994    3923 cri.go:76] found id: ""
	I0813 20:49:28.895052    3923 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:28.954406    3923 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","pid":4257,"status":"running","bundle":"/run/containers/storage/overlay-containers/10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5/userdata","rootfs":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","created":"2021-08-13T20:49:13.000895625Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"739bee08","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"739bee08\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.te
rminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.875097095Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provi
sioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a7079e689c0a4e4d71832ec264022bf461f0ce8ad4ce2b3108ed136791be2f03/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/
etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","pid":4225,"status":"running","bundle":"/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata","rootfs":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","created":"2021-08-13T20:49:12.571539079Z","annotations":{"addonmanager.kubernetes.io/mode"
:"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:49:12.082823120Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"v
olumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podaa2d90d0_7a2e_40cf_b9ac_81fb9e2c1e76.slice","io.kubernetes.cri-o.ContainerID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:49:12.45648335Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.p
od.name\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b371eb6a701d211019f02265e2b7e86f1082a1d6de3736aec82972dd30ae9cc7/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kub
ernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provis
ioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:49:12.082823120Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.ha
sh":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","i
o.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a
1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f
97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o",
"io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes
.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernete
s.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/con
fig.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-
o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d
9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default
","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.
hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf24544
28a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b2380
25a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb
8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o
.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.k
ubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff819d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-
o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad436
0b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1
\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4g
rvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b24
47402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/use
rdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernete
s.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubern
etes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\"
,\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/us
erdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kub
ernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-2
0210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/control
ler-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed
'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"
ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kuber
netes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Privi
legedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36d
cf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2b
a567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-2021081320460
0-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"reado
nly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5d
d04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.k
ubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":
"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\
",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inact
ive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created":"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.
ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath
":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e99200313
30011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/user
data","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z"
,"io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b7
9a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48b
a3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:28.955233    3923 cri.go:113] list returned 15 containers
	I0813 20:49:28.955251    3923 cri.go:116] container: {ID:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 Status:running}
	I0813 20:49:28.955263    3923 cri.go:116] container: {ID:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e Status:running}
	I0813 20:49:28.955268    3923 cri.go:118] skipping 2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e - not in ps
	I0813 20:49:28.955272    3923 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:28.955281    3923 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:28.955285    3923 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:28.955289    3923 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:28.955293    3923 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:28.955296    3923 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:28.955301    3923 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:28.955306    3923 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:28.955309    3923 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:28.955313    3923 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:28.955317    3923 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:28.955320    3923 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:28.955325    3923 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:28.955329    3923 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:28.955332    3923 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:28.955336    3923 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:28.955342    3923 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:28.955346    3923 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:28.955353    3923 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:28.955361    3923 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
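
The cri.go lines above show minikube cross-checking each container returned by the runtime's list (the JSON dump earlier) against the IDs present in the CRI "ps" view, and skipping sandboxes and other entries that only appear in the former. A minimal sketch of that filter under assumed, simplified types (container and filterByPS are illustrative names, not minikube's actual API):

package main

import "fmt"

// container mirrors the {ID, Status} pairs logged by cri.go above;
// the type and fields are illustrative only.
type container struct {
	ID     string
	Status string
}

// filterByPS keeps only containers whose IDs also appear in the CRI
// ps output, mirroring the "skipping ... - not in ps" log lines.
func filterByPS(listed []container, psIDs map[string]bool) []container {
	var keep []container
	for _, c := range listed {
		if !psIDs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	ps := map[string]bool{"10dab2af9957": true}
	listed := []container{
		{ID: "10dab2af9957", Status: "running"},
		{ID: "e99200313300", Status: "running"}, // sandbox, not in ps
	}
	fmt.Println(filterByPS(listed, ps))
}
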
	I0813 20:49:28.955401    3923 ssh_runner.go:149] Run: sudo runc pause 10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5
	I0813 20:49:28.981336    3923 ssh_runner.go:149] Run: sudo runc pause 10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164
	I0813 20:49:29.015417    3923 out.go:177] 
	W0813 20:49:29.015610    3923 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5 2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:49:29Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:49:29.015624    3923 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	W0813 20:49:29.029612    3923 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:49:29.031236    3923 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813204600-30853 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (287.852325ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25: (1.222225109s)
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                      | multinode-20210813202419-30853          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:16 UTC | Fri, 13 Aug 2021 20:39:18 UTC |
	|         | multinode-20210813202419-30853          |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:02 UTC | Fri, 13 Aug 2021 20:43:38 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false             |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:38 UTC | Fri, 13 Aug 2021 20:43:41 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl pull busybox             |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:41 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2          |                                         |         |         |                               |                               |
	|         |  --container-runtime=crio               |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl image ls                 |                                         |         |         |                               |                               |
	| -p      | test-preload-20210813204102-30853       | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:25 UTC | Fri, 13 Aug 2021 20:44:26 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	| start   | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:26 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2             |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:21 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --cancel-scheduled                      |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:34 UTC | Fri, 13 Aug 2021 20:45:42 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --schedule 5s                           |                                         |         |         |                               |                               |
	| delete  | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:59 UTC | Fri, 13 Aug 2021 20:46:00 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:02 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:02 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubenet-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:03 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | kubenet-20210813204703-30853            |                                         |         |         |                               |                               |
	| delete  | -p false-20210813204703-30853           | false-20210813204703-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:04 UTC | Fri, 13 Aug 2021 20:47:04 UTC |
	| start   | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:42 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2    |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:42 UTC | Fri, 13 Aug 2021 20:47:44 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	| start   | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:48:55 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --memory=2048                      |                                         |         |         |                               |                               |
	|         | --wait=true --driver=kvm2               |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:55 UTC | Fri, 13 Aug 2021 20:48:57 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --install-addons=false                  |                                         |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:06 UTC | Fri, 13 Aug 2021 20:49:13 UTC |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:16 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:18 UTC | Fri, 13 Aug 2021 20:49:19 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:20 UTC | Fri, 13 Aug 2021 20:49:21 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| unpause | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:23 UTC |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:49:06
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:49:06.750460    3412 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:06.750532    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750535    3412 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:06.750538    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750645    3412 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:06.750968    3412 out.go:305] Setting JSON to false
	I0813 20:49:06.794979    3412 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9109,"bootTime":1628878638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:49:06.795299    3412 start.go:121] virtualization: kvm guest
	I0813 20:49:06.798215    3412 out.go:177] * [pause-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:49:06.799922    3412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:06.798386    3412 notify.go:169] Checking for updates...
	I0813 20:49:06.801691    3412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:49:06.803336    3412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:49:06.804849    3412 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:49:06.805220    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:06.805637    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.805697    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.817202    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0813 20:49:06.817597    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.818173    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.818195    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.818649    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.818887    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.819077    3412 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:49:06.819425    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.819465    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.830844    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0813 20:49:06.831324    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.831848    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.831871    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.832233    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.832415    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.865593    3412 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:49:06.865627    3412 start.go:278] selected driver: kvm2
	I0813 20:49:06.865641    3412 start.go:751] validating driver "kvm2" against &{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.865757    3412 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:49:06.866497    3412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.866703    3412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:49:06.878129    3412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:49:06.878764    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:06.878779    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:06.878789    3412 start_flags.go:277] config:
	{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.878936    3412 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.881128    3412 out.go:177] * Starting control plane node pause-20210813204600-30853 in cluster pause-20210813204600-30853
	I0813 20:49:06.881153    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:06.881197    3412 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:49:06.881216    3412 cache.go:56] Caching tarball of preloaded images
	I0813 20:49:06.881339    3412 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:49:06.881361    3412 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
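
An aside on the preload lines above: the cached tarball name encodes the preload schema version, the Kubernetes version, the container runtime, and the architecture. A hypothetical reconstruction of that naming, inferred from the path in the log rather than taken from minikube's source:

package main

import "fmt"

func main() {
	// Format inferred from the cached path logged above; illustrative only.
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-cri-o-overlay-%s.tar.lz4",
		"v11", "v1.21.3", "amd64")
	fmt.Println(name) // preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
}
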
	I0813 20:49:06.881476    3412 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/config.json ...
	I0813 20:49:06.881656    3412 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:49:06.881687    3412 start.go:313] acquiring machines lock for pause-20210813204600-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:49:06.881775    3412 start.go:317] acquired machines lock for "pause-20210813204600-30853" in 71.324µs
	I0813 20:49:06.881794    3412 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:49:06.881801    3412 fix.go:55] fixHost starting: 
	I0813 20:49:06.882135    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.882177    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.894411    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0813 20:49:06.894958    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.895630    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.895652    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.896024    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.896206    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.896395    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:06.899827    3412 fix.go:108] recreateIfNeeded on pause-20210813204600-30853: state=Running err=<nil>
	W0813 20:49:06.899844    3412 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:49:05.079802    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:06.902070    3412 out.go:177] * Updating the running kvm2 "pause-20210813204600-30853" VM ...
	I0813 20:49:06.902100    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902283    3412 machine.go:88] provisioning docker machine ...
	I0813 20:49:06.902305    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902430    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902571    3412 buildroot.go:166] provisioning hostname "pause-20210813204600-30853"
	I0813 20:49:06.902599    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902737    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:06.908023    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908395    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:06.908431    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908509    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:06.908703    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908861    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908990    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:06.909175    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:06.909381    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:06.909399    3412 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813204600-30853 && echo "pause-20210813204600-30853" | sudo tee /etc/hostname
	I0813 20:49:07.062168    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813204600-30853
	
	I0813 20:49:07.062210    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.068189    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068544    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.068577    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068759    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.068953    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069117    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069259    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.069439    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.069612    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.069649    3412 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813204600-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813204600-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813204600-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:49:07.221530    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
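
The shell block above is how the provisioner keeps the 127.0.1.1 entry idempotent: leave /etc/hosts alone if some line already maps the hostname, otherwise rewrite the existing 127.0.1.1 line or append one. A minimal Go sketch of the same logic (ensureHostsEntry is a hypothetical helper; a real edit of /etc/hosts needs root and should write atomically):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: do nothing if a line already
	// ends with the hostname, otherwise rewrite the 127.0.1.1 entry or
	// append one. Sketch only.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
				return nil // already mapped
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "pause-20210813204600-30853"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
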
	I0813 20:49:07.221612    3412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:49:07.221648    3412 buildroot.go:174] setting up certificates
	I0813 20:49:07.221660    3412 provision.go:83] configureAuth start
	I0813 20:49:07.221672    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:07.221918    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:07.227471    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.227839    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.227868    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.228085    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.232869    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233213    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.233251    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233347    3412 provision.go:138] copyHostCerts
	I0813 20:49:07.233436    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:49:07.233450    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:49:07.233511    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:49:07.233650    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:49:07.233667    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:49:07.233695    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:49:07.233774    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:49:07.233784    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:49:07.233812    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:49:07.233859    3412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813204600-30853 san=[192.168.39.61 192.168.39.61 localhost 127.0.0.1 minikube pause-20210813204600-30853]
	I0813 20:49:07.320299    3412 provision.go:172] copyRemoteCerts
	I0813 20:49:07.320390    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:49:07.320428    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.325783    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326112    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.326152    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326310    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.326478    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.326610    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.326733    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:07.427180    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:49:07.450672    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:49:07.471272    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:49:07.489660    3412 provision.go:86] duration metric: configureAuth took 267.984336ms
	I0813 20:49:07.489686    3412 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:49:07.489862    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:07.489982    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.495300    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495618    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.495653    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495797    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.495985    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496150    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496279    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.496434    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.496609    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.496631    3412 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:49:08.602797    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:49:08.602830    3412 machine.go:91] provisioned docker machine in 1.700528876s
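
A note on the %!s(MISSING) token in the provisioning command above (and in the later date +%!s(MISSING).%!N(MISSING) probe, whose reply is clearly epoch seconds.nanoseconds, i.e. date +%s.%N): Go's fmt package renders a format verb with no matching operand as %!verb(MISSING), so a %s or %N intended for the remote shell leaks into the logged command when the string passes through a formatting call. A two-line demonstration of that behavior:

	package main

	import "fmt"

	func main() {
		// fmt renders a verb with no matching operand as %!verb(MISSING). A %s
		// meant for the remote shell's printf (it would need to be %%s in the
		// Go format string) therefore surfaces mangled in the logged command.
		format := "sudo mkdir -p /etc/sysconfig && printf %s"
		fmt.Println(fmt.Sprintf(format))
		// Prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
	}
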
	I0813 20:49:08.602841    3412 start.go:267] post-start starting for "pause-20210813204600-30853" (driver="kvm2")
	I0813 20:49:08.602846    3412 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:49:08.602880    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.603196    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:49:08.603247    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.608420    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608704    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.608735    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608875    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.609064    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.609198    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.609343    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.709733    3412 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:49:08.715709    3412 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:49:08.715731    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:49:08.715792    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:49:08.715871    3412 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:49:08.715956    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:49:08.724293    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:08.750217    3412 start.go:270] post-start completed in 147.362269ms
	I0813 20:49:08.750260    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.750492    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.756215    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756621    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.756650    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756812    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.757034    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757170    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757300    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.757480    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:08.757670    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:08.757683    3412 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 20:49:08.900897    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887748.901369788
	
	I0813 20:49:08.900932    3412 fix.go:212] guest clock: 1628887748.901369788
	I0813 20:49:08.900944    3412 fix.go:225] Guest: 2021-08-13 20:49:08.901369788 +0000 UTC Remote: 2021-08-13 20:49:08.750472863 +0000 UTC m=+2.052052145 (delta=150.896925ms)
	I0813 20:49:08.900988    3412 fix.go:196] guest clock delta is within tolerance: 150.896925ms
	I0813 20:49:08.900996    3412 fix.go:57] fixHost completed within 2.019194265s
	I0813 20:49:08.901002    3412 start.go:80] releasing machines lock for "pause-20210813204600-30853", held for 2.019216553s
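
The guest-clock check above parses the date reply from the VM, subtracts the host-side timestamp, and skips a clock fix while the skew stays small. A worked version of that arithmetic in Go, using the exact values from the log (the tolerance constant here is an illustrative assumption, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest time comes back from the VM as epoch seconds.nanoseconds;
		// remote time is the host's own clock at the moment of the probe.
		guest := time.Unix(1628887748, 901369788) // parsed from the SSH reply
		host := time.Date(2021, 8, 13, 20, 49, 8, 750472863, time.UTC)
		delta := guest.Sub(host)
		const tolerance = time.Second // assumed threshold for illustration
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance)
		// delta=150.896925ms withinTolerance=true
	}
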
	I0813 20:49:08.901046    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.901309    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:08.906817    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907191    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.907257    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907379    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.907574    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908140    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908391    3412 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:08.908418    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.908488    3412 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:49:08.908539    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.915229    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915547    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.915580    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915727    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.915920    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.916011    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916080    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.916237    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.916429    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.916461    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916636    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.916784    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.917107    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.917257    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:09.014176    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:09.014353    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:09.061257    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:09.061287    3412 crio.go:333] Images already preloaded, skipping extraction
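
"all images are preloaded" above is decided by decoding sudo crictl images --output json and checking that the required tags are present. A sketch of that check, assuming crictl is on PATH; the struct fields follow the CRI images JSON shape and the two tags are taken from elsewhere in this log, so treat both as assumptions:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList follows the JSON shape of `crictl images --output json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Two of the tags the preload would need; the real list is longer.
		for _, want := range []string{"k8s.gcr.io/kube-proxy:v1.21.3", "k8s.gcr.io/pause:3.4.1"} {
			fmt.Printf("%s preloaded=%v\n", want, have[want])
		}
	}
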
	I0813 20:49:09.061352    3412 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:49:09.075880    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:49:09.086949    3412 docker.go:153] disabling docker service ...
	I0813 20:49:09.087012    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:49:09.103245    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:49:09.117178    3412 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:49:09.373507    3412 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:49:09.585738    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:49:09.599794    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:49:09.615240    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:49:09.623727    3412 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:49:09.630919    3412 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:49:09.637747    3412 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:49:09.808564    3412 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:49:09.952030    3412 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:49:09.952144    3412 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:49:09.959400    3412 start.go:413] Will wait 60s for crictl version
	I0813 20:49:09.959452    3412 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:49:09.991124    3412 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
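
The two 60s waits above (first for the CRI-O socket, then for crictl version) are simple polls against a deadline. A minimal sketch of the socket wait, assuming local filesystem access; the 500ms poll interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls a path until it exists or the deadline passes,
	// the same shape as the 60s waits logged above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // poll interval is an assumption
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
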
	I0813 20:49:09.991251    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.280528    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.528655    3412 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:49:10.528694    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:10.534359    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.534782    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:10.534815    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.535076    3412 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:49:10.539953    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:10.540017    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.583397    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.583419    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:10.583459    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.620617    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.620642    3412 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:49:10.620703    3412 ssh_runner.go:149] Run: crio config
	I0813 20:49:10.896405    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:10.896427    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:10.896436    3412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:49:10.896448    3412 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813204600-30853 NodeName:pause-20210813204600-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.61 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:49:10.896629    3412 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210813204600-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
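
kubeadm.go:157 renders the config above from the options logged at kubeadm.go:153. One reading note: the evictionHard values are evidently meant to read "0%" in the written file; the trailing %!"(MISSING) is the same Go fmt artifact seen earlier (a %" sequence with no operand), introduced by the log line, not by the file on disk. A pared-down sketch of the rendering pattern with text/template; the struct and template here are illustrative, not minikube's actual kubeadm.go types:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmOpts is a tiny stand-in for the options struct above.
	type kubeadmOpts struct {
		K8sVersion  string
		PodSubnet   string
		ServiceCIDR string
	}

	const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta2\n" +
		"kind: ClusterConfiguration\n" +
		"kubernetesVersion: {{.K8sVersion}}\n" +
		"networking:\n" +
		"  dnsDomain: cluster.local\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceCIDR}}\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		_ = t.Execute(os.Stdout, kubeadmOpts{
			K8sVersion:  "v1.21.3",
			PodSubnet:   "10.244.0.0/16",
			ServiceCIDR: "10.96.0.0/12",
		})
	}
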
	
	I0813 20:49:10.896754    3412 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=pause-20210813204600-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.61 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
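
The unit fragment above is installed as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below). The empty ExecStart= line is deliberate systemd syntax: a drop-in must first clear the ExecStart inherited from kubelet.service before setting a new one. A sketch of writing such a drop-in, with the kubelet flags trimmed for illustration; a daemon-reload is still required afterwards:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The empty ExecStart= clears the command inherited from the base
		// kubelet.service unit; the second line then replaces it.
		unit := "[Service]\n" +
			"ExecStart=\n" +
			"ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf\n"
		path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
		if err := os.WriteFile(path, []byte(unit), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
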
	I0813 20:49:10.896819    3412 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:49:10.911638    3412 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:49:10.911723    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:49:10.920269    3412 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0813 20:49:10.933623    3412 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:49:10.945877    3412 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0813 20:49:10.958716    3412 ssh_runner.go:149] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0813 20:49:10.962845    3412 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853 for IP: 192.168.39.61
	I0813 20:49:10.962912    3412 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:49:10.962936    3412 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:49:10.963041    3412 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.key
	I0813 20:49:10.963067    3412 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key.e9ce627b
	I0813 20:49:10.963088    3412 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key
	I0813 20:49:10.963223    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:49:10.963274    3412 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:49:10.963290    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:49:10.963332    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:49:10.963362    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:49:10.963395    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:49:10.963481    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:10.964763    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:49:10.996208    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:49:11.015193    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:49:11.032382    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:49:11.050461    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:49:11.067415    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:49:11.085267    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:49:11.102588    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:49:11.128113    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:49:11.146008    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:49:11.162723    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:49:11.181637    3412 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:49:11.193799    3412 ssh_runner.go:149] Run: openssl version
	I0813 20:49:11.199783    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:49:11.209928    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214459    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214508    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.221207    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:49:11.229476    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:49:11.237550    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245454    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245501    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.251754    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:49:11.258461    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:49:11.267146    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271736    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271779    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.278000    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
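
The three openssl x509 -hash / ln -fs rounds above implement OpenSSL's hashed-certificate-directory convention: a trust root in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (here b5213941.0 for minikubeCA.pem). A Go sketch of one round; linkCertByHash is a hypothetical helper and the paths are illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash asks openssl for the subject hash of a certificate and
	// exposes it in certsDir as <hash>.0, which is how OpenSSL locates
	// trust roots.
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // recreate idempotently, like the test -L || ln -fs above
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
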
	I0813 20:49:11.284415    3412 kubeadm.go:390] StartCluster: {Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:11.284518    3412 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:49:11.284561    3412 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:49:11.324305    3412 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:11.324324    3412 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:11.324329    3412 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:11.324336    3412 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:11.324339    3412 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:11.324343    3412 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:11.324347    3412 cri.go:76] found id: ""
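
The six IDs above come from crictl ps -a --quiet filtered on the kube-system pod-namespace label; the empty trailing found id: "" is evidently the blank final line of that output. The same query issued from Go, assuming crictl is on PATH and sudo access is available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List all containers (any state) whose pod lives in kube-system and
		// print only their IDs, the query the log above runs over SSH.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
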
	I0813 20:49:11.324383    3412 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:11.370394    3412 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.prop
erty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.Contai
nerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30
853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c3
9b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe9
37ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","i
o.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes
.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af7
8ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210
813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff81
9d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d8
2b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/co
nfig.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.H
ostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.M
ountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"k
ube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/userdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"Fi
le","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.ui
d\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kube
rnetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.
61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/userdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePol
icy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kub
e-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","
io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/v
olume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:
48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-41
7f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-
o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968
848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container
.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.p
od.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProf
ilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d2
2eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.
kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/
storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\
":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created"
:"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/sto
rage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernet
es.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d55244
9a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",
\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]",
"io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode"
:"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:11.370977    3412 cri.go:113] list returned 13 containers
	I0813 20:49:11.370992    3412 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:11.371004    3412 cri.go:122] skipping {2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 running}: state = "running", want "paused"
	I0813 20:49:11.371014    3412 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:11.371019    3412 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:11.371023    3412 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:11.371028    3412 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:11.371034    3412 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:11.371040    3412 cri.go:122] skipping {66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf running}: state = "running", want "paused"
	I0813 20:49:11.371048    3412 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:11.371054    3412 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:11.371063    3412 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:11.371069    3412 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:11.371076    3412 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:11.371081    3412 cri.go:122] skipping {82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b running}: state = "running", want "paused"
	I0813 20:49:11.371087    3412 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:11.371091    3412 cri.go:122] skipping {83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 running}: state = "running", want "paused"
	I0813 20:49:11.371099    3412 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:11.371105    3412 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:11.371110    3412 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:11.371115    3412 cri.go:122] skipping {ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf running}: state = "running", want "paused"
	I0813 20:49:11.371119    3412 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:11.371127    3412 cri.go:122] skipping {d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 running}: state = "running", want "paused"
	I0813 20:49:11.371135    3412 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:11.371144    3412 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:11.371154    3412 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:11.371164    3412 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
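The cri.go:116-122 trace above applies two filters before pausing anything: drop any ID that did not appear in the earlier `crictl ps` listing (those are the sandbox/POD containers), and drop any container whose state is not the wanted "paused". A sketch of that selection, reusing the container type from the previous sketch (the function name is illustrative, not minikube's):

// idsToPause returns the container IDs that survive both filters.
func idsToPause(all []container, inPs map[string]bool, want string) []string {
	var ids []string
	for _, c := range all {
		if !inPs[c.ID] {
			// cri.go:118: "skipping <id> - not in ps" (sandbox containers)
			continue
		}
		if c.Status != want {
			// cri.go:122: state = "running", want "paused"
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids
}

With want set to "paused", every container in this run is skipped, which matches the log: nothing is paused and minikube proceeds straight to the kubeadm restart check.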
	I0813 20:49:11.371203    3412 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:49:11.379585    3412 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:49:11.379610    3412 kubeadm.go:600] restartCluster start
	I0813 20:49:11.379656    3412 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:49:11.387273    3412 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:49:11.388131    3412 kubeconfig.go:93] found "pause-20210813204600-30853" server: "https://192.168.39.61:8443"
	I0813 20:49:11.389906    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.391540    3412 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:49:11.398645    3412 api_server.go:164] Checking apiserver status ...
	I0813 20:49:11.398727    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:11.410339    3412 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2593/cgroup
	I0813 20:49:11.416825    3412 api_server.go:180] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope"
	I0813 20:49:11.416874    3412 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope/freezer.state
	I0813 20:49:11.424153    3412 api_server.go:202] freezer state: "THAWED"
	I0813 20:49:11.424172    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:11.430386    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
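The api_server.go sequence above locates the apiserver PID with pgrep, resolves its freezer cgroup, confirms the cgroup is "THAWED", and only then probes /healthz over TLS expecting a 200 "ok". A rough sketch of that final probe, assuming the cluster CA has been copied to a local ca.crt (the paths here are placeholders, not the run's real values):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// checkHealthz GETs https://<host>/healthz, trusting only the given CA,
// and treats anything other than a 200 as a failure.
func checkHealthz(host, caFile string) error {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool},
	}}
	resp, err := client.Get("https://" + host + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("https://%s/healthz returned 200:\n%s\n", host, body)
	return nil
}

func main() {
	if err := checkHealthz("192.168.39.61:8443", "ca.crt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}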
	I0813 20:49:11.447400    3412 system_pods.go:86] 6 kube-system pods found
	I0813 20:49:11.447439    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:11.447446    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:11.447453    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:11.447457    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:11.447460    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:11.447465    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:11.448566    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:11.448586    3412 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.39.61
	I0813 20:49:11.448597    3412 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:49:11.448603    3412 kubeadm.go:604] restartCluster took 68.987456ms
	I0813 20:49:11.448610    3412 kubeadm.go:392] StartCluster complete in 164.201481ms
	I0813 20:49:11.448627    3412 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.448743    3412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:11.449587    3412 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.450509    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.454641    3412 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813204600-30853" rescaled to 1
	I0813 20:49:11.454698    3412 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:49:11.454707    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:49:11.456952    3412 out.go:177] * Verifying Kubernetes components...
	I0813 20:49:11.457008    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:11.454754    3412 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:49:11.457069    3412 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.455000    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:11.457090    3412 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813204600-30853"
	W0813 20:49:11.457098    3412 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:49:11.457112    3412 addons.go:59] Setting default-storageclass=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.457130    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.457136    3412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813204600-30853"
	I0813 20:49:11.457449    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457490    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.457642    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457688    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.468728    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0813 20:49:11.469146    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.469685    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.469705    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.470063    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.470584    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.470626    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.476732    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0813 20:49:11.477171    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.477677    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.477701    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.478079    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.478277    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.482479    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.483740    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0813 20:49:11.484114    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.484536    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.484555    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.484941    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.485097    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.487884    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
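The repeating main.go pattern above ("Launching plugin server for driver kvm2" -> "Plugin server listening at address 127.0.0.1:<port>" -> "() Calling .GetVersion") is libmachine's per-driver RPC handshake: each driver binary serves RPC on an ephemeral localhost port, and the host process calls driver methods through it. A toy net/rpc rendition of that pattern (the DriverServer type and its method are illustrative, not libmachine's real wire API):

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// DriverServer is a stand-in for a libmachine driver plugin.
type DriverServer struct{}

// GetVersion mirrors the "Using API Version  1" step in the log.
func (s *DriverServer) GetVersion(_ int, v *int) error {
	*v = 1
	return nil
}

func main() {
	// Plugin side: serve RPC on an ephemeral localhost port, as in
	// "Plugin server listening at address 127.0.0.1:40557".
	srv := rpc.NewServer()
	if err := srv.Register(&DriverServer{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)

	// Host side: dial the advertised address and call through it,
	// as in "() Calling .GetVersion".
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var version int
	if err := client.Call("DriverServer.GetVersion", 0, &version); err != nil {
		panic(err)
	}
	fmt.Println("Using API Version", version)
}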
	I0813 20:49:11.490267    3412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:49:11.488882    3412 addons.go:135] Setting addon default-storageclass=true in "pause-20210813204600-30853"
	W0813 20:49:11.490289    3412 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:49:11.490323    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.490374    3412 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.490389    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:49:11.490406    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.490689    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.490728    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.496655    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497065    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.497093    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497244    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.497423    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.497618    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.497767    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.503422    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0813 20:49:11.503821    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.504277    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.504306    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.504582    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.505173    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.505219    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.518799    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0813 20:49:11.519214    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.519629    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.519655    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.519995    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.520180    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.523435    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.523650    3412 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:11.523666    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:49:11.523682    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.529028    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529396    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.529423    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529571    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.529736    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.529865    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.530004    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
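
sshutil.go above opens a key-based SSH session (user docker, 192.168.39.61:22, the profile's id_rsa) that the ssh_runner lines then use to copy addon manifests and run commands on the VM. A rough equivalent with golang.org/x/crypto/ssh, assuming those same parameters; the key path is a placeholder and host-key verification is skipped only to keep the sketch short:

	package main
	
	import (
		"fmt"
		"io/ioutil"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := ioutil.ReadFile("/path/to/profile/id_rsa") // placeholder path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify in real use
		}
		client, err := ssh.Dial("tcp", "192.168.39.61:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Same kind of command the harness runs over this channel below.
		out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("%s err=%v\n", out, err)
	}
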
	I0813 20:49:11.605965    3412 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:49:11.606090    3412 node_ready.go:35] waiting up to 6m0s for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610421    3412 node_ready.go:49] node "pause-20210813204600-30853" has status "Ready":"True"
	I0813 20:49:11.610442    3412 node_ready.go:38] duration metric: took 4.320432ms waiting for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610453    3412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:11.616546    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.616740    3412 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631733    3412 pod_ready.go:92] pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.631757    3412 pod_ready.go:81] duration metric: took 14.992576ms waiting for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631771    3412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639091    3412 pod_ready.go:92] pod "etcd-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.639117    3412 pod_ready.go:81] duration metric: took 7.33748ms waiting for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639129    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645487    3412 pod_ready.go:92] pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.645508    3412 pod_ready.go:81] duration metric: took 6.370538ms waiting for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645519    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652583    3412 pod_ready.go:92] pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.652602    3412 pod_ready.go:81] duration metric: took 7.073719ms waiting for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652614    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.658710    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:12.038755    3412 pod_ready.go:92] pod "kube-proxy-4n8kb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.038776    3412 pod_ready.go:81] duration metric: took 386.155583ms waiting for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.038787    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.069005    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069032    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069056    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069036    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069332    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069333    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069336    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069348    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069357    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069364    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069368    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069371    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069377    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069380    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069631    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069649    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069664    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069635    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069693    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069706    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069717    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069914    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069931    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.071889    3412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:49:12.071910    3412 addons.go:344] enableAddons completed in 617.161828ms
	I0813 20:49:12.434704    3412 pod_ready.go:92] pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.434726    3412 pod_ready.go:81] duration metric: took 395.931948ms waiting for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.434734    3412 pod_ready.go:38] duration metric: took 824.269103ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:12.434752    3412 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:12.434790    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:12.451457    3412 api_server.go:70] duration metric: took 996.725767ms to wait for apiserver process to appear ...
	I0813 20:49:12.451487    3412 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:12.451500    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:12.457776    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0813 20:49:12.458697    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:12.458716    3412 api_server.go:129] duration metric: took 7.221294ms to wait for apiserver health ...
	I0813 20:49:12.458726    3412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:12.637203    3412 system_pods.go:59] 7 kube-system pods found
	I0813 20:49:12.637240    3412 system_pods.go:61] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:12.637248    3412 system_pods.go:61] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:12.637254    3412 system_pods.go:61] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:12.637261    3412 system_pods.go:61] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:12.637266    3412 system_pods.go:61] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:12.637272    3412 system_pods.go:61] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:12.637281    3412 system_pods.go:61] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:12.637290    3412 system_pods.go:74] duration metric: took 178.557519ms to wait for pod list to return data ...
	I0813 20:49:12.637299    3412 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:12.841324    3412 default_sa.go:45] found service account: "default"
	I0813 20:49:12.841350    3412 default_sa.go:55] duration metric: took 204.040505ms for default service account to be created ...
	I0813 20:49:12.841359    3412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:13.042158    3412 system_pods.go:86] 7 kube-system pods found
	I0813 20:49:13.042205    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:13.042216    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:13.042224    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:13.042237    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:13.042245    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:13.042257    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:13.042278    3412 system_pods.go:89] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:13.042295    3412 system_pods.go:126] duration metric: took 200.930278ms to wait for k8s-apps to be running ...
	I0813 20:49:13.042313    3412 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:13.042369    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:13.056816    3412 system_svc.go:56] duration metric: took 14.491659ms WaitForService to wait for kubelet.
	I0813 20:49:13.056852    3412 kubeadm.go:547] duration metric: took 1.60212918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:49:13.056882    3412 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:13.236184    3412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:49:13.236241    3412 node_conditions.go:123] node cpu capacity is 2
	I0813 20:49:13.236260    3412 node_conditions.go:105] duration metric: took 179.373183ms to run NodePressure ...
	I0813 20:49:13.236273    3412 start.go:231] waiting for startup goroutines ...
	I0813 20:49:13.296415    3412 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:49:13.298518    3412 out.go:177] * Done! kubectl is now configured to use "pause-20210813204600-30853" cluster and "default" namespace by default
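
The node_ready/pod_ready waiters in the run above all follow one shape: poll the API on a short interval until the Ready condition is True or the 6m0s timeout lapses, then record the duration metric. A sketch of that pattern using client-go and apimachinery's wait helpers; the interval and error handling here are guesses, not minikube's exact code:

	// Package waiters sketches the poll-until-Ready pattern logged above.
	package waiters
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReady polls every 500ms until the pod reports Ready=True or the
	// timeout expires (wait.PollImmediate returns wait.ErrWaitTimeout then).
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors while waiting
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
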
	I0813 20:49:10.080830    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:10.579566    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.540519    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40792->192.168.50.24:8443: read: connection reset by peer
	I0813 20:49:14.579739    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.580451    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.079298    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.079947    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.579678    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.580450    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:16.078922    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:16.079480    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:16.578921    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:16.579558    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:17.079061    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:17.079634    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:17.578938    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:17.579564    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:18.078941    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:18.079479    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:18.579014    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:18.579747    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:19.078958    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:19.079711    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:19.578954    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:19.579634    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:20.079244    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:20.079886    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:20.579532    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:20.580176    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:21.079797    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:21.080567    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:21.578936    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:21.579550    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:22.079538    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:22.080490    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:22.578975    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:22.579658    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:23.079124    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:23.079812    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:23.579375    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:23.580065    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:24.079644    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:24.080385    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:24.578980    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:24.579783    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
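
Meanwhile process 2943 is stuck in api_server.go's healthz loop: one HTTPS probe roughly every 500ms, each one refused while its apiserver is down. A self-contained sketch of such a probe loop; the real client verifies the cluster CA, and InsecureSkipVerify below is only to keep the sketch short:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 4 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Connection-refused errors are expected while the apiserver restarts;
			// just sleep and retry, as the loop above does.
			resp, err := client.Get("https://192.168.50.24:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for healthz")
	}
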
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:29 UTC. --
	Aug 13 20:49:28 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:28.892388180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=771a3774-cef4-4a64-b390-ea6469ec2216 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.630388583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ca6d833-8093-4c01-a64c-702e0c8fbc06 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.630566577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ca6d833-8093-4c01-a64c-702e0c8fbc06 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.630804401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ca6d833-8093-4c01-a64c-702e0c8fbc06 name=/runtime.v1alpha2.RuntimeService/ListContainers
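
Each of these journal entries is one gRPC round trip on CRI's runtime.v1alpha2 RuntimeService: an unfiltered ListContainersRequest answered with the full container list. A sketch of issuing the same call directly at the CRI-O socket; the socket path is assumed, and the v1alpha2 package matches this 2021-era runtime:

	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)
	
	func main() {
		// CRI-O's conventional socket; adjust for your runtime.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter is what produces "No filters were applied, returning
		// full container list" in the CRI-O debug log above.
		resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:12], c.Metadata.Name, c.State)
		}
	}
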
	[Three further ListContainers request/response cycles follow at 20:49:29.681 (id a6e508df-0ea4-4ee4-b079-d2fb5e8bcc56), 20:49:29.725 (id 98b15e38-8fec-407e-8906-f3a4aa4af2f1), and 20:49:29.772 (id dd69bbb0-cd5b-43cb-9fce-ee344f3eb82c); their responses are byte-for-byte identical to the container list above and are omitted here.]
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.813379799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fbe255c1-1675-46cd-bd17-722b6587dd79 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.813525899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fbe255c1-1675-46cd-bd17-722b6587dd79 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.813692054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fbe255c1-1675-46cd-bd17-722b6587dd79 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.850898837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=207281fd-80e1-43c2-a510-7ec49c45afaf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.851250328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=207281fd-80e1-43c2-a510-7ec49c45afaf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.851686499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=207281fd-80e1-43c2-a510-7ec49c45afaf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.886782917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c46b7b65-1992-4655-9c5b-7c792374d3f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.886928357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c46b7b65-1992-4655-9c5b-7c792374d3f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.887079041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c46b7b65-1992-4655-9c5b-7c792374d3f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.928369585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8dd246df-d06b-4260-b5b4-4f3fef280ecd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.928514396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8dd246df-d06b-4260-b5b4-4f3fef280ecd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:29 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:29.928672277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8dd246df-d06b-4260-b5b4-4f3fef280ecd name=/runtime.v1alpha2.RuntimeService/ListContainers
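	The ListContainers traffic above is the kubelet's routine CRI polling; each request carries an empty filter, so cri-o returns the full container list every time. The same RPC can be exercised by hand as a spot check. A minimal sketch, assuming crictl is available inside the VM and using the cri-o socket path named in the node annotations below:
	
	  $ out/minikube-linux-amd64 -p pause-20210813204600-30853 ssh
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a    # same unfiltered ListContainers call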
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	10dab2af99578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago       Running             storage-provisioner       0                   2a6ab48b5042a
	d33287457e451       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   About a minute ago   Running             coredns                   0                   8088cc5d3d38a
	2e50c328d7104       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   564d5f18f75ed
	ac4bf726a8a57       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   e992003133001
	66655950d3afa       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   55ddf08f50f8c
	83df9633ff352       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   6c56d5bf50b7a
	82d4de99d88e5       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   f228ab759c26a
	
	* 
	* ==> coredns [d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	I0813 20:48:56.155624       1 trace.go:205] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[1427131847]: [30.002619331s] [30.002619331s] END
	E0813 20:48:56.155739       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155858       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.154) (total time: 30001ms):
	Trace[911902081]: [30.001733139s] [30.001733139s] END
	E0813 20:48:56.155865       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155918       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[2019727887]: [30.002706635s] [30.002706635s] END
	E0813 20:48:56.156104       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
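	All three reflector failures above target https://10.96.0.1:443, which by default is the ClusterIP of the kubernetes Service fronting the apiserver, so CoreDNS lost apiserver connectivity for the full 30s list timeout. A quick way to confirm that VIP, assuming the kubeconfig context carries the profile name (a spot check, not part of the test itself):
	
	  $ kubectl --context pause-20210813204600-30853 get svc kubernetes -o wide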
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813204600-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813204600-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813204600-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_48_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:48:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813204600-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-20210813204600-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 07e647a52575478182b10082d1b9460a
	  System UUID:                07e647a5-2575-4781-82b1-0082d1b9460a
	  Boot ID:                    1c1f8243-ce7f-455c-a669-de6493424040
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-4grvm                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-pause-20210813204600-30853                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         73s
	  kube-system                 kube-apiserver-pause-20210813204600-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-20210813204600-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-4n8kb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-pause-20210813204600-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  96s (x6 over 96s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x5 over 96s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x5 over 96s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 74s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                72s                kubelet     Node pause-20210813204600-30853 status is now: NodeReady
	  Normal  Starting                 64s                kube-proxy  Starting kube-proxy.
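	The node description above is plain kubectl describe output and can be regenerated against the same cluster. A minimal equivalent, again assuming the kubeconfig context carries the profile name:
	
	  $ kubectl --context pause-20210813204600-30853 describe node pause-20210813204600-30853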
	
	* 
	* ==> dmesg <==
	* [  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.165176] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.050992] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.137498] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1726 comm=systemd-network
	[  +1.376463] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007022] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.624786] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +20.400328] systemd-fstab-generator[2162]: Ignoring "noauto" for root device
	[  +0.134832] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +0.282454] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +6.552961] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[Aug13 20:48] systemd-fstab-generator[2800]: Ignoring "noauto" for root device
	[ +13.894926] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.479825] kauditd_printk_skb: 80 callbacks suppressed
	[Aug13 20:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.187207] systemd-fstab-generator[4013]: Ignoring "noauto" for root device
	[  +0.260965] systemd-fstab-generator[4026]: Ignoring "noauto" for root device
	[  +0.242550] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
	[  +3.941917] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.801138] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[  +1.042940] systemd-fstab-generator[4407]: Ignoring "noauto" for root device
	[  +7.458737] systemd-fstab-generator[4909]: Ignoring "noauto" for root device
	[  +0.592251] systemd-fstab-generator[4937]: Ignoring "noauto" for root device
	[  +1.206597] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf] <==
	* 2021-08-13 20:48:01.922733 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:48:01.952757 I | embed: serving client requests on 192.168.39.61:2379
	2021-08-13 20:48:01.954160 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:48:01.975055 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:48:12.629799 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (355.071918ms) to execute
	2021-08-13 20:48:18.621673 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" " with result "range_response_count:0 size:5" took too long (1.837036221s) to execute
	2021-08-13 20:48:18.622362 W | wal: sync duration of 1.607346013s, expected less than 1s
	2021-08-13 20:48:18.623060 W | etcdserver: request "header:<ID:12771218163585540132 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" value_size:632 lease:3547846126730764118 >> failure:<>>" with result "size:16" took too long (1.606807479s) to execute
	2021-08-13 20:48:18.624926 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.461501725s) to execute
	2021-08-13 20:48:18.628021 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813204600-30853\" " with result "range_response_count:1 size:3982" took too long (1.370325429s) to execute
	2021-08-13 20:48:21.346921 W | wal: sync duration of 1.299304523s, expected less than 1s
	2021-08-13 20:48:21.347401 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.068677828s) to execute
	2021-08-13 20:48:24.481477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:26.500706 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (233.724165ms) to execute
	2021-08-13 20:48:26.501137 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gm2bv\" " with result "range_response_count:1 size:4473" took too long (378.683681ms) to execute
	2021-08-13 20:48:26.502059 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-4grvm\" " with result "range_response_count:1 size:4461" took too long (270.883259ms) to execute
	2021-08-13 20:48:28.869625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:38.868019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:48.868044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:58.870803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:49:00.399177 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.157615469s) to execute
	2021-08-13 20:49:00.400612 W | etcdserver: request "header:<ID:12771218163585540646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" mod_revision:468 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" value_size:584 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" > >>" with result "size:16" took too long (200.747119ms) to execute
	2021-08-13 20:49:00.400917 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (1.158534213s) to execute
	2021-08-13 20:49:00.401297 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (569.698ms) to execute
	2021-08-13 20:49:08.868736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
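	The wal sync warnings above (1.3s and 1.6s against an expected sub-second fsync) point at slow disk writes in the VM, which is also what stretched the neighboring range requests past a second. etcd's client tooling can probe the same endpoint; a hypothetical spot check, assuming etcdctl v3 is present in the etcd pod and minikube's default certificate layout under /var/lib/minikube/certs/etcd:
	
	  $ kubectl --context pause-20210813204600-30853 -n kube-system exec etcd-pause-20210813204600-30853 -- \
	      sh -c 'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	        --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	        --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	        endpoint status -w table'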
	
	* 
	* ==> kernel <==
	*  20:49:30 up 2 min,  0 users,  load average: 1.64, 0.76, 0.29
	Linux pause-20210813204600-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b] <==
	* Trace[1175388272]: [1.383273804s] [1.383273804s] END
	I0813 20:48:18.647776       1 trace.go:205] Trace[1480647024]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.255) (total time: 1391ms):
	Trace[1480647024]: ---"Object stored in database" 1379ms (20:48:00.638)
	Trace[1480647024]: [1.391864844s] [1.391864844s] END
	I0813 20:48:18.651341       1 trace.go:205] Trace[532588033]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[532588033]: [1.395160654s] [1.395160654s] END
	I0813 20:48:18.651913       1 trace.go:205] Trace[486245217]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[486245217]: [1.395849853s] [1.395849853s] END
	I0813 20:48:18.659173       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:48:21.348539       1 trace.go:205] Trace[264690694]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:20.278) (total time: 1070ms):
	Trace[264690694]: [1.070400996s] [1.070400996s] END
	I0813 20:48:22.995388       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:48:23.545730       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:48:37.713151       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:48:37.713388       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:48:37.713410       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:49:00.401993       1 trace.go:205] Trace[875370503]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.240) (total time: 1161ms):
	Trace[875370503]: ---"About to write a response" 1161ms (20:49:00.401)
	Trace[875370503]: [1.161749328s] [1.161749328s] END
	I0813 20:49:00.403705       1 trace.go:205] Trace[1375945297]: "Get" url:/api/v1/nodes/pause-20210813204600-30853,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.241) (total time: 1162ms):
	Trace[1375945297]: ---"About to write a response" 1161ms (20:49:00.403)
	Trace[1375945297]: [1.162052238s] [1.162052238s] END
	I0813 20:49:08.639766       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:49:08.639943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:49:08.639963       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659] <==
	* I0813 20:48:22.670523       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0813 20:48:22.676047       1 shared_informer.go:247] Caches are synced for job 
	I0813 20:48:22.676648       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:48:22.680632       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:48:22.680827       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0813 20:48:22.713877       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:48:22.743162       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:48:22.743798       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:48:22.849717       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0813 20:48:22.888695       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:22.888733       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:48:22.923738       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:48:22.923844       1 disruption.go:371] Sending events to api server.
	I0813 20:48:22.939921       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:23.006118       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4n8kb"
	E0813 20:48:23.080425       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4ec5a127-3b2a-4f66-8321-f0bab85709c0", ResourceVersion:"304", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484491, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000abfda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000abfdb8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014a9280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00142b740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abf
dd0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abfde8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014a92c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001419440), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00144e5a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000843e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00163c430)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00144e608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:48:23.316478       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352329       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352427       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:48:23.554638       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:48:23.583893       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:48:23.645559       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gm2bv"
	I0813 20:48:23.652683       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4grvm"
	I0813 20:48:23.772425       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gm2bv"
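The repeated "Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified" error at the top of this section is Kubernetes' optimistic-concurrency check: the controller wrote against a stale resourceVersion and must re-read before retrying. A minimal sketch of the standard client-go retry pattern for that conflict (illustrative only; the daemon set controller has its own requeue logic):

	import (
		"context"

		appsv1 "k8s.io/api/apps/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// updateKubeProxyDS re-reads the DaemonSet on every attempt, so an
	// "object has been modified" conflict just triggers a fresh Get+Update.
	func updateKubeProxyDS(cs kubernetes.Interface, mutate func(*appsv1.DaemonSet)) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
			if err != nil {
				return err
			}
			mutate(ds)
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
			return err
		})
	}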
	
	* 
	* ==> kube-proxy [2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164] <==
	* I0813 20:48:26.523023       1 node.go:172] Successfully retrieved node IP: 192.168.39.61
	I0813 20:48:26.523578       1 server_others.go:140] Detected node IP 192.168.39.61
	W0813 20:48:26.523867       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:48:26.597173       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:48:26.597466       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:48:26.597629       1 server_others.go:212] Using iptables Proxier.
	I0813 20:48:26.599876       1 server.go:643] Version: v1.21.3
	I0813 20:48:26.601871       1 config.go:315] Starting service config controller
	I0813 20:48:26.601925       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:48:26.601964       1 config.go:224] Starting endpoint slice config controller
	I0813 20:48:26.601993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:48:26.626937       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:48:26.631306       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for endpoint slice config 
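The two v1beta1 warnings above come from kube-proxy's EndpointSlice informer; v1.21 serves the same objects through discovery.k8s.io/v1, which becomes the only option in v1.25. A hedged sketch of reading slices through the v1 API (assumes an ordinary clientset; this is not kube-proxy's own code):

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listEndpointSlicesV1 lists slices via the non-deprecated v1 API that
	// the warning recommends.
	func listEndpointSlicesV1(cs kubernetes.Interface, ns string) error {
		slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, s := range slices.Items {
			fmt.Printf("%s/%s: %d endpoints\n", s.Namespace, s.Name, len(s.Endpoints))
		}
		return nil
	}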
	
	* 
	* ==> kube-scheduler [66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf] <==
	* E0813 20:48:07.253858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:48:07.253939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:07.254089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:48:07.254763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:07.256625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:07.257805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:07.257988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:07.258811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:07.259413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:48:07.261132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.091658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:08.147159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:08.202089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:08.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:08.318956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.416964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:08.426635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:08.429682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.498271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.623065       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:48:08.623400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.652497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:48:11.848968       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
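The wall of "forbidden" errors above is a startup race, not a broken cluster: the scheduler's informers begin listing before the API server finishes reconciling the bootstrap RBAC roles, and the errors stop once caches sync at 20:48:11. To probe those permissions directly, a SubjectAccessReview performs the same check the API server applied (a sketch assuming cluster-admin credentials; the scheduler itself simply retries):

	import (
		"context"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// schedulerCanList asks whether system:kube-scheduler may list a resource,
	// mirroring the authorization decision behind each error above.
	func schedulerCanList(cs kubernetes.Interface, group, resource string) (bool, error) {
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    group,
					Resource: resource,
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		return resp.Status.Allowed, nil
	}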
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:30 UTC. --
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397743    4917 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397782    4917 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397793    4917 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397803    4917 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397879    4917 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397942    4917 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397951    4917 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398002    4917 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398022    4917 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398319    4917 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398339    4917 remote_image.go:50] parsed scheme: ""
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398345    4917 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398368    4917 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398376    4917 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398465    4917 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398484    4917 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398529    4917 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398543    4917 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398824    4917 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.404419    4917 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.2" apiVersion="v1alpha1"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: E0813 20:49:28.719939    4917 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.721488    4917 server.go:1190] "Started kubelet"
	Aug 13 20:49:28 pause-20210813204600-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:49:28 pause-20210813204600-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
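Two things stand out in this kubelet excerpt: the gRPC dial to the CRI socket (the "parsed scheme" and "pick_first" lines are grpc-go's passthrough resolver handling unix:///var/run/crio/crio.sock), and systemd stopping the freshly started kubelet a second later, consistent with restart churn while the node is being reconfigured. A minimal sketch of that unix-socket dial, using the grpc-go API current in 2021 (grpc.WithInsecure has since been deprecated):

	import (
		"context"
		"net"
		"time"

		"google.golang.org/grpc"
	)

	// dialCRI opens a gRPC connection to a CRI endpoint such as
	// /var/run/crio/crio.sock, as the kubelet log above shows.
	func dialCRI(socketPath string) (*grpc.ClientConn, error) {
		dialer := func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return grpc.DialContext(ctx, socketPath,
			grpc.WithInsecure(),
			grpc.WithContextDialer(dialer),
			grpc.WithBlock())
	}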
	
	* 
	* ==> storage-provisioner [10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5] <==
	* I0813 20:49:13.139876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:49:13.163404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:49:13.163867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:49:13.184473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:49:13.184758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	I0813 20:49:13.194291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e31d828-490b-41db-8431-f66bfdb15cd4", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c became leader
	I0813 20:49:13.286143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
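The provisioner's startup above is textbook client-go leader election: acquire the kube-system/k8s.io-minikube-hostpath lock (an Endpoints object, per the Event line), then start the controller. A sketch of the same pattern using the newer Lease lock (illustrative names; minikube's storage provisioner locks Endpoints, not a Lease):

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWhenLeader blocks until id holds the lease, then runs fn; losing the
	// lease cancels fn's context so the controller stops cleanly.
	func runWhenLeader(ctx context.Context, cs *kubernetes.Clientset, id string, fn func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: fn,
				OnStoppedLeading: func() { /* demoted: stop provisioning */ },
			},
		})
	}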
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (273.595091ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813204600-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813204600-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813204600-30853 describe pod : exit status 1 (67.245064ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813204600-30853 describe pod : exit status 1
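Note the pattern in these helpers: a non-zero exit is recorded as data ("status error: exit status 2 (may be ok)") rather than failing the test outright, and an empty pod list turns "kubectl describe pod" into the expected "resource name may not be empty" error. A hedged sketch of that capture-the-exit-code idiom in plain Go (function name is illustrative, not the harness's):

	import (
		"errors"
		"os/exec"
	)

	// runForStatus returns a command's combined output and exit code so the
	// caller can decide whether a non-zero code "may be ok".
	func runForStatus(name string, args ...string) (string, int, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return string(out), ee.ExitCode(), nil
		}
		if err != nil {
			return string(out), -1, err
		}
		return string(out), 0, nil
	}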
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (305.812979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813204600-30853 logs -n 25: (1.288940745s)
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:02 UTC | Fri, 13 Aug 2021 20:43:38 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false             |                                         |         |         |                               |                               |
	|         | --driver=kvm2                           |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:38 UTC | Fri, 13 Aug 2021 20:43:41 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl pull busybox             |                                         |         |         |                               |                               |
	| start   | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:43:41 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2          |                                         |         |         |                               |                               |
	|         |  --container-runtime=crio               |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3            |                                         |         |         |                               |                               |
	| ssh     | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:22 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	|         | -- sudo crictl image ls                 |                                         |         |         |                               |                               |
	| -p      | test-preload-20210813204102-30853       | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:22 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| delete  | -p                                      | test-preload-20210813204102-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:25 UTC | Fri, 13 Aug 2021 20:44:26 UTC |
	|         | test-preload-20210813204102-30853       |                                         |         |         |                               |                               |
	| start   | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:26 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2             |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:21 UTC | Fri, 13 Aug 2021 20:45:21 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --cancel-scheduled                      |                                         |         |         |                               |                               |
	| stop    | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:34 UTC | Fri, 13 Aug 2021 20:45:42 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	|         | --schedule 5s                           |                                         |         |         |                               |                               |
	| delete  | -p                                      | scheduled-stop-20210813204426-30853     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:59 UTC | Fri, 13 Aug 2021 20:46:00 UTC |
	|         | scheduled-stop-20210813204426-30853     |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:02 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210813204600-30853  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:02 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | force-systemd-env-20210813204600-30853  |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubenet-20210813204703-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:03 UTC | Fri, 13 Aug 2021 20:47:03 UTC |
	|         | kubenet-20210813204703-30853            |                                         |         |         |                               |                               |
	| delete  | -p false-20210813204703-30853           | false-20210813204703-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:04 UTC | Fri, 13 Aug 2021 20:47:04 UTC |
	| start   | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:47:42 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2    |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210813204600-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:42 UTC | Fri, 13 Aug 2021 20:47:44 UTC |
	|         | kubernetes-upgrade-20210813204600-30853 |                                         |         |         |                               |                               |
	| start   | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:48:55 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --memory=2048                      |                                         |         |         |                               |                               |
	|         | --wait=true --driver=kvm2               |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| delete  | -p                                      | offline-crio-20210813204600-30853       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:55 UTC | Fri, 13 Aug 2021 20:48:57 UTC |
	|         | offline-crio-20210813204600-30853       |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:46:00 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --install-addons=false                  |                                         |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| start   | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:06 UTC | Fri, 13 Aug 2021 20:49:13 UTC |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                      |                                         |         |         |                               |                               |
	|         | --container-runtime=crio                |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:16 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:18 UTC | Fri, 13 Aug 2021 20:49:19 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:20 UTC | Fri, 13 Aug 2021 20:49:21 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	| unpause | -p pause-20210813204600-30853           | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:23 UTC |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                               |                               |
	| -p      | pause-20210813204600-30853              | pause-20210813204600-30853              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:29 UTC | Fri, 13 Aug 2021 20:49:30 UTC |
	|         | logs -n 25                              |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:49:06
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:49:06.750460    3412 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:06.750532    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750535    3412 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:06.750538    3412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:06.750645    3412 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:06.750968    3412 out.go:305] Setting JSON to false
	I0813 20:49:06.794979    3412 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9109,"bootTime":1628878638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:49:06.795299    3412 start.go:121] virtualization: kvm guest
	I0813 20:49:06.798215    3412 out.go:177] * [pause-20210813204600-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:49:06.799922    3412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:06.798386    3412 notify.go:169] Checking for updates...
	I0813 20:49:06.801691    3412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:49:06.803336    3412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:49:06.804849    3412 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:49:06.805220    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:06.805637    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.805697    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.817202    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0813 20:49:06.817597    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.818173    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.818195    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.818649    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.818887    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.819077    3412 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:49:06.819425    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.819465    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.830844    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0813 20:49:06.831324    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.831848    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.831871    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.832233    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.832415    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.865593    3412 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:49:06.865627    3412 start.go:278] selected driver: kvm2
	I0813 20:49:06.865641    3412 start.go:751] validating driver "kvm2" against &{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.865757    3412 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:49:06.866497    3412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.866703    3412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:49:06.878129    3412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:49:06.878764    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:06.878779    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:06.878789    3412 start_flags.go:277] config:
	{Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:06.878936    3412 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:49:06.881128    3412 out.go:177] * Starting control plane node pause-20210813204600-30853 in cluster pause-20210813204600-30853
	I0813 20:49:06.881153    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:06.881197    3412 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:49:06.881216    3412 cache.go:56] Caching tarball of preloaded images
	I0813 20:49:06.881339    3412 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:49:06.881361    3412 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:49:06.881476    3412 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/config.json ...
	I0813 20:49:06.881656    3412 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:49:06.881687    3412 start.go:313] acquiring machines lock for pause-20210813204600-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:49:06.881775    3412 start.go:317] acquired machines lock for "pause-20210813204600-30853" in 71.324µs
	I0813 20:49:06.881794    3412 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:49:06.881801    3412 fix.go:55] fixHost starting: 
	I0813 20:49:06.882135    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:06.882177    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:06.894411    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0813 20:49:06.894958    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:06.895630    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:06.895652    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:06.896024    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:06.896206    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.896395    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:06.899827    3412 fix.go:108] recreateIfNeeded on pause-20210813204600-30853: state=Running err=<nil>
	W0813 20:49:06.899844    3412 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:49:05.079802    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:06.902070    3412 out.go:177] * Updating the running kvm2 "pause-20210813204600-30853" VM ...
	I0813 20:49:06.902100    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902283    3412 machine.go:88] provisioning docker machine ...
	I0813 20:49:06.902305    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:06.902430    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902571    3412 buildroot.go:166] provisioning hostname "pause-20210813204600-30853"
	I0813 20:49:06.902599    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:06.902737    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:06.908023    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908395    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:06.908431    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:06.908509    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:06.908703    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908861    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:06.908990    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:06.909175    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:06.909381    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:06.909399    3412 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813204600-30853 && echo "pause-20210813204600-30853" | sudo tee /etc/hostname
	I0813 20:49:07.062168    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813204600-30853
	
	I0813 20:49:07.062210    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.068189    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068544    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.068577    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.068759    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.068953    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069117    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.069259    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.069439    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.069612    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.069649    3412 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813204600-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813204600-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813204600-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:49:07.221530    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
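The provisioning steps above (set /etc/hostname, patch /etc/hosts) are each one shell command run over SSH against the VM at 192.168.39.61. A minimal sketch of that run-a-command-over-SSH step with golang.org/x/crypto/ssh (key loading and parameter names are assumptions for illustration, not minikube's provisioner code):

	import (
		"io/ioutil"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH executes one command on the VM, as the provisioner does for
	// "sudo hostname ..." and the /etc/hosts edit shown above.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := ioutil.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}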
	I0813 20:49:07.221612    3412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:49:07.221648    3412 buildroot.go:174] setting up certificates
	I0813 20:49:07.221660    3412 provision.go:83] configureAuth start
	I0813 20:49:07.221672    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetMachineName
	I0813 20:49:07.221918    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:07.227471    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.227839    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.227868    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.228085    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.232869    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233213    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.233251    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.233347    3412 provision.go:138] copyHostCerts
	I0813 20:49:07.233436    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:49:07.233450    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:49:07.233511    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:49:07.233650    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:49:07.233667    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:49:07.233695    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:49:07.233774    3412 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:49:07.233784    3412 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:49:07.233812    3412 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
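The copyHostCerts step above is a plain remove-then-copy: if a stale file already exists at the destination it is removed, then the cert is copied fresh. A minimal standalone sketch of that pattern (a hypothetical helper for illustration, not minikube's exec_runner):

package main

import (
	"io"
	"log"
	"os"
)

// replaceFile mirrors the remove-then-copy pattern above: drop any
// stale destination file, then copy the source fresh.
func replaceFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Relative stand-ins for the cert paths in the log above.
	if err := replaceFile(".minikube/certs/cert.pem", ".minikube/cert.pem"); err != nil {
		log.Fatal(err)
	}
}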
	I0813 20:49:07.233859    3412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813204600-30853 san=[192.168.39.61 192.168.39.61 localhost 127.0.0.1 minikube pause-20210813204600-30853]
	I0813 20:49:07.320299    3412 provision.go:172] copyRemoteCerts
	I0813 20:49:07.320390    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:49:07.320428    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.325783    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326112    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.326152    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.326310    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.326478    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.326610    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.326733    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:07.427180    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:49:07.450672    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:49:07.471272    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:49:07.489660    3412 provision.go:86] duration metric: configureAuth took 267.984336ms
	I0813 20:49:07.489686    3412 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:49:07.489862    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:07.489982    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:07.495300    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495618    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:07.495653    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:07.495797    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:07.495985    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496150    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:07.496279    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:07.496434    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:07.496609    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:07.496631    3412 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:49:08.602797    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:49:08.602830    3412 machine.go:91] provisioned docker machine in 1.700528876s
	I0813 20:49:08.602841    3412 start.go:267] post-start starting for "pause-20210813204600-30853" (driver="kvm2")
	I0813 20:49:08.602846    3412 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:49:08.602880    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.603196    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:49:08.603247    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.608420    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608704    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.608735    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.608875    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.609064    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.609198    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.609343    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.709733    3412 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:49:08.715709    3412 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:49:08.715731    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:49:08.715792    3412 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:49:08.715871    3412 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 20:49:08.715956    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:49:08.724293    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:08.750217    3412 start.go:270] post-start completed in 147.362269ms
	I0813 20:49:08.750260    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.750492    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.756215    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756621    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.756650    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.756812    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.757034    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757170    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.757300    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.757480    3412 main.go:130] libmachine: Using SSH client type: native
	I0813 20:49:08.757670    3412 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0813 20:49:08.757683    3412 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 20:49:08.900897    3412 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628887748.901369788
	
	I0813 20:49:08.900932    3412 fix.go:212] guest clock: 1628887748.901369788
	I0813 20:49:08.900944    3412 fix.go:225] Guest: 2021-08-13 20:49:08.901369788 +0000 UTC Remote: 2021-08-13 20:49:08.750472863 +0000 UTC m=+2.052052145 (delta=150.896925ms)
	I0813 20:49:08.900988    3412 fix.go:196] guest clock delta is within tolerance: 150.896925ms
	I0813 20:49:08.900996    3412 fix.go:57] fixHost completed within 2.019194265s
	I0813 20:49:08.901002    3412 start.go:80] releasing machines lock for "pause-20210813204600-30853", held for 2.019216553s
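The guest clock check above runs `date +%s.%N` on the guest, parses the seconds.nanoseconds output, and compares it against the host-side timestamp taken at the same moment. A sketch of that arithmetic using the exact values from this log; the one-second tolerance is an assumption for illustration, since the log only shows the check passing:

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock as printed by `date +%s.%N` in the log above.
	out := "1628887748.901369788"
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	guest := time.Unix(sec, nsec)

	// Host timestamp from the same log line ("Remote: 2021-08-13 20:49:08.750472863 +0000 UTC").
	host := time.Date(2021, 8, 13, 20, 49, 8, 750472863, time.UTC)

	delta := guest.Sub(host)
	const tolerance = time.Second // assumed tolerance, for illustration only
	ok := delta < tolerance && delta > -tolerance
	// Prints: guest clock delta: 150.896925ms (within tolerance: true)
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, ok)
}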
	I0813 20:49:08.901046    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.901309    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:08.906817    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907191    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.907257    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.907379    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.907574    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908140    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:08.908391    3412 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:08.908418    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.908488    3412 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:49:08.908539    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:08.915229    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915547    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.915580    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.915727    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.915920    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.916011    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916080    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.916237    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:08.916429    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:08.916461    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:08.916636    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:08.916784    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:08.917107    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:08.917257    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:09.014176    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:09.014353    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:09.061257    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:09.061287    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:09.061352    3412 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:49:09.075880    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:49:09.086949    3412 docker.go:153] disabling docker service ...
	I0813 20:49:09.087012    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:49:09.103245    3412 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:49:09.117178    3412 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:49:09.373507    3412 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:49:09.585738    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:49:09.599794    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:49:09.615240    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
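The /etc/crictl.yaml written above points both the runtime and image endpoints at the CRI-O socket, so the later `crictl` calls in this log talk to CRI-O. The endpoint can also be passed explicitly on the command line; a small sketch, assuming crictl on PATH and passwordless sudo, purely for illustration:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same socket the crictl.yaml above configures, passed explicitly here.
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}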
	I0813 20:49:09.623727    3412 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:49:09.630919    3412 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:49:09.637747    3412 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:49:09.808564    3412 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:49:09.952030    3412 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:49:09.952144    3412 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
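"Will wait 60s for socket path" above is a poll-until-deadline on the socket file. A minimal local sketch of such a loop; the 500ms poll interval is an assumption, and a real runner would stat the path over SSH rather than locally:

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio socket is up")
}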
	I0813 20:49:09.959400    3412 start.go:413] Will wait 60s for crictl version
	I0813 20:49:09.959452    3412 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:49:09.991124    3412 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 20:49:09.991251    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.280528    3412 ssh_runner.go:149] Run: crio --version
	I0813 20:49:10.528655    3412 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 20:49:10.528694    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetIP
	I0813 20:49:10.534359    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.534782    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:10.534815    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:10.535076    3412 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:49:10.539953    3412 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:49:10.540017    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.583397    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.583419    3412 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:49:10.583459    3412 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:49:10.620617    3412 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:49:10.620642    3412 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:49:10.620703    3412 ssh_runner.go:149] Run: crio config
	I0813 20:49:10.896405    3412 cni.go:93] Creating CNI manager for ""
	I0813 20:49:10.896427    3412 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:49:10.896436    3412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:49:10.896448    3412 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813204600-30853 NodeName:pause-20210813204600-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.61 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:49:10.896629    3412 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210813204600-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:49:10.896754    3412 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=pause-20210813204600-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.61 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
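The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a stream is to decode it document by document; a sketch with gopkg.in/yaml.v3, reading the path the rendered file is copied to a few lines below:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path the rendered config is copied to (see the scp line below).
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}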
	I0813 20:49:10.896819    3412 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:49:10.911638    3412 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:49:10.911723    3412 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:49:10.920269    3412 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0813 20:49:10.933623    3412 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:49:10.945877    3412 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0813 20:49:10.958716    3412 ssh_runner.go:149] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0813 20:49:10.962845    3412 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853 for IP: 192.168.39.61
	I0813 20:49:10.962912    3412 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:49:10.962936    3412 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:49:10.963041    3412 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.key
	I0813 20:49:10.963067    3412 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key.e9ce627b
	I0813 20:49:10.963088    3412 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key
	I0813 20:49:10.963223    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 20:49:10.963274    3412 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 20:49:10.963290    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:49:10.963332    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:49:10.963362    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:49:10.963395    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:49:10.963481    3412 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 20:49:10.964763    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:49:10.996208    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:49:11.015193    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:49:11.032382    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:49:11.050461    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:49:11.067415    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 20:49:11.085267    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:49:11.102588    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:49:11.128113    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:49:11.146008    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 20:49:11.162723    3412 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 20:49:11.181637    3412 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:49:11.193799    3412 ssh_runner.go:149] Run: openssl version
	I0813 20:49:11.199783    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 20:49:11.209928    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214459    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.214508    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 20:49:11.221207    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:49:11.229476    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:49:11.237550    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245454    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.245501    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:49:11.251754    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:49:11.258461    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 20:49:11.267146    3412 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271736    3412 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.271779    3412 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 20:49:11.278000    3412 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 20:49:11.284415    3412 kubeadm.go:390] StartCluster: {Name:pause-20210813204600-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813204600-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:49:11.284518    3412 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:49:11.284561    3412 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:49:11.324305    3412 cri.go:76] found id: "d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6"
	I0813 20:49:11.324324    3412 cri.go:76] found id: "2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164"
	I0813 20:49:11.324329    3412 cri.go:76] found id: "ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf"
	I0813 20:49:11.324336    3412 cri.go:76] found id: "66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf"
	I0813 20:49:11.324339    3412 cri.go:76] found id: "83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659"
	I0813 20:49:11.324343    3412 cri.go:76] found id: "82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b"
	I0813 20:49:11.324347    3412 cri.go:76] found id: ""
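The container IDs above come from the crictl invocation a few lines up: with --quiet it prints one container ID per line, filtered on the io.kubernetes.pod.namespace label. A sketch that runs the same command locally and splits the output (illustration only; the test drives it through ssh_runner on the guest):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Verbatim flags from the ssh_runner invocation above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	// --quiet prints one container ID per line; Fields also drops blank lines.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}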
	I0813 20:49:11.324383    3412 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:49:11.370394    3412 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","pid":3260,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164/userdata","rootfs":"/var/lib/containers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","created":"2021-08-13T20:48:25.650799846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7bfe6d1f","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7bfe6d1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.433420822Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/1ccc2d84fe0a60d461dd72af86d7f5780b2da5976de38939c7890b8a3d55de18/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/containers/kube-proxy/b214a802\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/volumes/kubernetes.io~projected/kube-api-access-qrwsr\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.prop
erty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","pid":2560,"status":"running","bundle":"/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata","rootfs":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","created":"2021-08-13T20:47:58.451921584Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170563888Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podace00bb4fb8a8a9569ff7dae47e01d30.slice","io.kubernetes.cri-o.ContainerID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.Contai
nerName":"k8s_POD_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.734913609Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30
853_ace00bb4fb8a8a9569ff7dae47e01d30/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813204600-30853\",\"uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b7885cf6e1abafb50756a5cea5af4f5e83b1549d14d3e0bd81d90f70e9dcccd/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c3
9b238025a67ffbc7ea","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","pid":3063,"status":"running","bundle":"/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata","rootfs":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe9
37ca4df37/merged","created":"2021-08-13T20:48:24.164151322Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.030706859Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7c8a1bad_1f97_44ad_a3e3_fb9d52cfd0d9.slice","io.kubernetes.cri-o.ContainerID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.009794742Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/hostname","i
o.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-4n8kb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-4n8kb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4n8kb_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-4n8kb\",\"uid\":\"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd391b0b2ac3c0b70b38aff365f4775ede94606e02528bf3325afe937ca4df37/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-4n8kb_kube-system_7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9_0","io.kubernetes
.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057/userdata/shm","io.kubernetes.pod.name":"kube-proxy-4n8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:48:23.030706859Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf/userdata","rootfs":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","created":"2021-08-13T20:48:00.371988051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.05184871Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ace00bb4fb8a8a9569ff7dae47e01d30\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813204600-30853_ace00bb4fb8a8a9569ff7dae47e01d30/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d940b9d4b91b1eb9ad4ede528f46bfcdb90fefb5d34c475d3a309f82de158c9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813204600-30853_kube-system_ace00bb4fb8a8a9569ff7dae47e01d30_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ace00bb4fb8a8a9569ff7dae47e01d30/containers/kube-scheduler/1a90a935\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.hash":"ace00bb4fb8a8a9569ff7dae47e01d30","kubernetes.io/config.seen":"2021-08-13T20:47:54.170563888Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","pid":2531,"status":"running","bundle":"/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af7
8ce2fb71d82b52d87fa45aaf3/userdata","rootfs":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","created":"2021-08-13T20:47:58.134632094Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b586eaff819d4c98a938914befbf359d\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170560054Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podb586eaff819d4c98a938914befbf359d.slice","io.kubernetes.cri-o.ContainerID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.58849323Z","io.kubernetes.cri-o.HostName":"pause-20210
813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813204600-30853\",\"uid\":\"b586eaff81
9d4c98a938914befbf359d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3be0b4a23c201acae9eeedbfeee87580085e04bcd877f77dfbafcbc55e170a4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d8
2b52d87fa45aaf3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","pid":3202,"status":"running","bundle":"/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata","rootfs":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","created":"2021-08-13T20:48:25.02088557Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/co
nfig.seen\":\"2021-08-13T20:48:23.684666458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth769c0295\",\"mac\":\"0e:7f:8d:fd:2a:c5\"},{\"name\":\"eth0\",\"mac\":\"46:39:40:9e:ad:d7\",\"sandbox\":\"/var/run/netns/70e99836-e661-4e4f-bfb4-1e8d94b25ad2\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod72033717_35d7_4397_b3c5_28028e7270f3.slice","io.kubernetes.cri-o.ContainerID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.356545063Z","io.kubernetes.cri-o.H
ostName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-4grvm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-4grvm\",\"uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.M
ountPoint":"/var/lib/containers/storage/overlay/1686d27a9ba4b2de7b24a7aac6b0a637ffbbfdb2a8f5a96ecf53cf8030bc0437/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"k
ube-system","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","pid":2593,"status":"running","bundle":"/run/containers/storage/overlay-containers/82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b/userdata","rootfs":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","created":"2021-08-13T20:47:59.106710832Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"46519583","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"Fi
le","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"46519583\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:58.700311118Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.ui
d\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/664fcbcaa74eb758ebbaf7a5f05481f3b2596df896b908aa7ac89d4c20a7077f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kube
rnetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb76671b6b79a1d552449a94a3dbfa98/containers/kube-apiserver/d05226bf\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.
61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","pid":2654,"status":"running","bundle":"/run/containers/storage/overlay-containers/83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659/userdata","rootfs":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","created":"2021-08-13T20:47:59.879440634Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfe11a","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePol
icy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfe11a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:47:59.302380713Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kub
e-system\",\"io.kubernetes.pod.uid\":\"b586eaff819d4c98a938914befbf359d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813204600-30853_b586eaff819d4c98a938914befbf359d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/447b863ea64efc8f150086d288c4990f0d8bc1a6ed928e5ee1fb9545ad3877cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813204600-30853_kube-system_b586eaff819d4c98a938914befbf359d_0","
io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/containers/kube-controller-manager/3fd07eff\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b586eaff819d4c98a938914befbf359d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/v
olume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.hash":"b586eaff819d4c98a938914befbf359d","kubernetes.io/config.seen":"2021-08-13T20:47:54.170560054Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata","rootfs":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","created":"2021-08-13T20:
48:24.985669139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T20:48:23.664842879Z\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fe:4d:4e:cc:e0:8e\"},{\"name\":\"veth8015c076\",\"mac\":\"b6:65:b6:ec:41:c5\"},{\"name\":\"eth0\",\"mac\":\"e2:c2:94:2c:86:54\",\"sandbox\":\"/var/run/netns/18863c2e-48ba-4850-8146-8e155524b6dd\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod21759cc2_1fdb_417f_bc71_01fb6f9d0c35.slice","io.kubernetes.cri-o.ContainerID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-41
7f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:48:24.319998358Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-gm2bv","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-gm2bv\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"558bd4d5db\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-gm2bv_21759cc2-1fdb-417f-bc71-01fb6f9d0c35/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540.log","io.kubernetes.cri-
o.Metadata":"{\"name\":\"coredns-558bd4d5db-gm2bv\",\"uid\":\"21759cc2-1fdb-417f-bc71-01fb6f9d0c35\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b63c5c5a6dfb41be1ab6e52f67cddb89988a7d6f1a644f5b0532602605d5ddc/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-gm2bv_kube-system_21759cc2-1fdb-417f-bc71-01fb6f9d0c35_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f35d968
848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-gm2bv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"21759cc2-1fdb-417f-bc71-01fb6f9d0c35","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:48:23.664842879Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","pid":2754,"status":"running","bundle":"/run/containers/storage/overlay-containers/ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf/userdata","rootfs":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","created":"2021-08-13T20:48:00.893103098Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5d26fc81","io.kubernetes.container.name":"etcd","io.kubernetes.container
.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5d26fc81\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:00.424653769Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.p
od.name\":\"etcd-pause-20210813204600-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cd6ad4099543d3f72807024ae228a3d3318ed5de379789a23413cf59d01aaf2/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.SeccompProf
ilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/545d21e989d5ed3752d22eeb8bd8ffce/containers/etcd/7df814d9\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d2
2eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","pid":3295,"status":"running","bundle":"/run/containers/storage/overlay-containers/d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","created":"2021-08-13T20:48:25.853932123Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"861ab352","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.
kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"861ab352\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:48:25.56946163Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-4grvm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72033717-35d7-4397-b3c5-28028e7270f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-4grvm_72033717-35d7-4397-b3c5-28028e7270f3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5dd04ef33781294728c72d72256a89ec1625c26da13f152f0017322c9f3a81/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/
storage/overlay-containers/8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-4grvm_kube-system_72033717-35d7-4397-b3c5-28028e7270f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/containers/coredns/baf35c8d\",\"readonly\":false},{\"container_path\
":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72033717-35d7-4397-b3c5-28028e7270f3/volumes/kubernetes.io~projected/kube-api-access-zsj85\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-4grvm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72033717-35d7-4397-b3c5-28028e7270f3","kubernetes.io/config.seen":"2021-08-13T20:48:23.684666458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","pid":2552,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata","rootfs":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","created"
:"2021-08-13T20:47:58.569818878Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.61:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170566946Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod545d21e989d5ed3752d22eeb8bd8ffce.slice","io.kubernetes.cri-o.ContainerID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.638411495Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/sto
rage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813204600-30853_545d21e989d5ed3752d22eeb8bd8ffce/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813204600-30853\",\"uid\":\"545d21e989d5ed3752d22eeb8bd8ffce\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/021a30871f7d495f915a8f91bf415f3cbd9c4d8d382e52911ebc9c25c3f13a41/merged","io.kubernet
es.cri-o.Name":"k8s_etcd-pause-20210813204600-30853_kube-system_545d21e989d5ed3752d22eeb8bd8ffce_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"545d21e989d5ed3752d22eeb8bd8ffce","kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":"https://192.168.39.61:2379","kubernetes.io/config.hash":"545d21e989d5ed3752d22eeb8bd8ffce","kubernetes.io/config.seen":"2021-08-13T20:47:54.170566946Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","pid":2497,"status":"running","bundle":"/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata","rootfs":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","created":"2021-08-13T20:47:57.759478731Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:47:54.170508472Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb76671b6b79a1d55244
9a94a3dbfa98\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.61:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice","io.kubernetes.cri-o.ContainerID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:47:57.128395566Z","io.kubernetes.cri-o.HostName":"pause-20210813204600-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",
\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813204600-30853\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb76671b6b79a1d552449a94a3dbfa98\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813204600-30853_cb76671b6b79a1d552449a94a3dbfa98/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813204600-30853\",\"uid\":\"cb76671b6b79a1d552449a94a3dbfa98\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9907c7e779c8eeb339895a3368ece476810a9eeae6886b44ac08f4be1a89c6e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813204600-30853_kube-system_cb76671b6b79a1d552449a94a3dbfa98_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]",
"io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813204600-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb76671b6b79a1d552449a94a3dbfa98","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.61:8443","kubernetes.io/config.hash":"cb76671b6b79a1d552449a94a3dbfa98","kubernetes.io/config.seen":"2021-08-13T20:47:54.170508472Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode"
:"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:49:11.370977    3412 cri.go:113] list returned 13 containers
	I0813 20:49:11.370992    3412 cri.go:116] container: {ID:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 Status:running}
	I0813 20:49:11.371004    3412 cri.go:122] skipping {2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164 running}: state = "running", want "paused"
	I0813 20:49:11.371014    3412 cri.go:116] container: {ID:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea Status:running}
	I0813 20:49:11.371019    3412 cri.go:118] skipping 55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea - not in ps
	I0813 20:49:11.371023    3412 cri.go:116] container: {ID:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 Status:running}
	I0813 20:49:11.371028    3412 cri.go:118] skipping 564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057 - not in ps
	I0813 20:49:11.371034    3412 cri.go:116] container: {ID:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf Status:running}
	I0813 20:49:11.371040    3412 cri.go:122] skipping {66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf running}: state = "running", want "paused"
	I0813 20:49:11.371048    3412 cri.go:116] container: {ID:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 Status:running}
	I0813 20:49:11.371054    3412 cri.go:118] skipping 6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3 - not in ps
	I0813 20:49:11.371063    3412 cri.go:116] container: {ID:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 Status:running}
	I0813 20:49:11.371069    3412 cri.go:118] skipping 8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271 - not in ps
	I0813 20:49:11.371076    3412 cri.go:116] container: {ID:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b Status:running}
	I0813 20:49:11.371081    3412 cri.go:122] skipping {82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b running}: state = "running", want "paused"
	I0813 20:49:11.371087    3412 cri.go:116] container: {ID:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 Status:running}
	I0813 20:49:11.371091    3412 cri.go:122] skipping {83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659 running}: state = "running", want "paused"
	I0813 20:49:11.371099    3412 cri.go:116] container: {ID:9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 Status:stopped}
	I0813 20:49:11.371105    3412 cri.go:118] skipping 9f35d968848cc4ec0954671fd80dc8d73695c8256d460f93746944ed89069540 - not in ps
	I0813 20:49:11.371110    3412 cri.go:116] container: {ID:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf Status:running}
	I0813 20:49:11.371115    3412 cri.go:122] skipping {ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf running}: state = "running", want "paused"
	I0813 20:49:11.371119    3412 cri.go:116] container: {ID:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 Status:running}
	I0813 20:49:11.371127    3412 cri.go:122] skipping {d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6 running}: state = "running", want "paused"
	I0813 20:49:11.371135    3412 cri.go:116] container: {ID:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f Status:running}
	I0813 20:49:11.371144    3412 cri.go:118] skipping e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f - not in ps
	I0813 20:49:11.371154    3412 cri.go:116] container: {ID:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 Status:running}
	I0813 20:49:11.371164    3412 cri.go:118] skipping f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029 - not in ps
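	The skip decisions above are a two-stage filter: container IDs that never appeared in the `crictl ps` output are dropped first (these are the pod sandboxes), then containers whose state differs from the requested one are dropped. Because the caller asked for state "paused" and everything is running, the result is empty and minikube proceeds to restart detection. A hypothetical reconstruction of that logic (names are illustrative, not minikube's):

    package main

    import "fmt"

    type container struct {
        ID     string
        Status string
    }

    // filterByState keeps only containers that were listed by `crictl ps`
    // and whose state matches the requested one.
    func filterByState(all []container, inPs map[string]bool, want string) []container {
        var keep []container
        for _, c := range all {
            if !inPs[c.ID] {
                fmt.Printf("skipping %s - not in ps\n", c.ID) // sandboxes land here
                continue
            }
            if c.Status != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
                continue
            }
            keep = append(keep, c)
        }
        return keep
    }

    func main() {
        // IDs shortened; with want="paused" and every container running,
        // the result is empty, matching the trace above.
        all := []container{{ID: "2e50c328", Status: "running"}, {ID: "55ddf08f", Status: "running"}}
        fmt.Println(filterByState(all, map[string]bool{"2e50c328": true}, "paused"))
    }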
	I0813 20:49:11.371203    3412 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:49:11.379585    3412 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:49:11.379610    3412 kubeadm.go:600] restartCluster start
	I0813 20:49:11.379656    3412 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:49:11.387273    3412 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:49:11.388131    3412 kubeconfig.go:93] found "pause-20210813204600-30853" server: "https://192.168.39.61:8443"
	I0813 20:49:11.389906    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.391540    3412 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:49:11.398645    3412 api_server.go:164] Checking apiserver status ...
	I0813 20:49:11.398727    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:11.410339    3412 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2593/cgroup
	I0813 20:49:11.416825    3412 api_server.go:180] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope"
	I0813 20:49:11.416874    3412 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb76671b6b79a1d552449a94a3dbfa98.slice/crio-82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b.scope/freezer.state
	I0813 20:49:11.424153    3412 api_server.go:202] freezer state: "THAWED"
	I0813 20:49:11.424172    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:11.430386    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
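	The four commands above form a single health probe: find the newest kube-apiserver process, read its freezer cgroup, confirm the cgroup is THAWED (i.e. the container is not paused), and only then hit /healthz. A sketch of the same sequence, with one stated assumption: the log runs these commands over ssh_runner inside the VM, while this sketch runs them locally to stay self-contained.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
    )

    // run executes a command and returns its trimmed stdout.
    func run(name string, args ...string) string {
        out, err := exec.Command(name, args...).Output()
        if err != nil {
            panic(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // 1. Newest kube-apiserver process.
        pid := run("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
        // 2. Its freezer cgroup line, e.g. "11:freezer:/kubepods.slice/.../crio-<id>.scope".
        line := run("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup")
        parts := strings.SplitN(line, ":", 3)
        if len(parts) != 3 {
            panic("unexpected cgroup line: " + line)
        }
        // 3. THAWED means the container is not paused.
        state := run("sudo", "cat", "/sys/fs/cgroup/freezer"+parts[2]+"/freezer.state")
        if state != "THAWED" {
            fmt.Println("apiserver paused, freezer state:", state)
            return
        }
        // 4. Probe /healthz; the test cluster's cert is self-signed, so
        // verification is skipped in this sketch.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.39.61:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("healthz returned", resp.StatusCode) // 200 plus "ok", as above
    }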
	I0813 20:49:11.447400    3412 system_pods.go:86] 6 kube-system pods found
	I0813 20:49:11.447439    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:11.447446    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:11.447453    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:11.447457    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:11.447460    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:11.447465    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:11.448566    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:11.448586    3412 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.39.61
	I0813 20:49:11.448597    3412 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:49:11.448603    3412 kubeadm.go:604] restartCluster took 68.987456ms
	I0813 20:49:11.448610    3412 kubeadm.go:392] StartCluster complete in 164.201481ms
	I0813 20:49:11.448627    3412 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.448743    3412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:49:11.449587    3412 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:49:11.450509    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.454641    3412 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813204600-30853" rescaled to 1
	I0813 20:49:11.454698    3412 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:49:11.454707    3412 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:49:11.456952    3412 out.go:177] * Verifying Kubernetes components...
	I0813 20:49:11.457008    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:11.454754    3412 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:49:11.457069    3412 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.455000    3412 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:49:11.457090    3412 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813204600-30853"
	W0813 20:49:11.457098    3412 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:49:11.457112    3412 addons.go:59] Setting default-storageclass=true in profile "pause-20210813204600-30853"
	I0813 20:49:11.457130    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.457136    3412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813204600-30853"
	I0813 20:49:11.457449    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457490    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.457642    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.457688    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.468728    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0813 20:49:11.469146    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.469685    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.469705    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.470063    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.470584    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.470626    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.476732    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0813 20:49:11.477171    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.477677    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.477701    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.478079    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.478277    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.482479    3412 kapi.go:59] client config for pause-20210813204600-30853: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813204600-30853/client.ke
y", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:49:11.483740    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0813 20:49:11.484114    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.484536    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.484555    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.484941    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.485097    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.487884    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
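	The repeated "Launching plugin server" / "Plugin server listening" pairs above show the libmachine plugin pattern: each driver handle launches the docker-machine-driver-kvm2 binary as a child process serving RPC on a random loopback port, and every ".GetVersion", ".GetState", ".DriverName" line is a method call against that server. A compact sketch of the pattern with Go's net/rpc (the Driver type here is an illustrative stand-in, not libmachine's):

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    // Driver stands in for the kvm2 machine driver; the real plugin also
    // exposes GetVersion, SetConfigRaw, GetMachineName, and more.
    type Driver struct{}

    func (d *Driver) GetState(args string, reply *string) error {
        *reply = "Running"
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
        if err != nil {
            panic(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var state string
        if err := client.Call("Driver.GetState", "", &state); err != nil {
            panic(err)
        }
        fmt.Println("Calling .GetState ->", state)
    }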
	I0813 20:49:11.490267    3412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:49:11.488882    3412 addons.go:135] Setting addon default-storageclass=true in "pause-20210813204600-30853"
	W0813 20:49:11.490289    3412 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:49:11.490323    3412 host.go:66] Checking if "pause-20210813204600-30853" exists ...
	I0813 20:49:11.490374    3412 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.490389    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:49:11.490406    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.490689    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.490728    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.496655    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497065    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.497093    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.497244    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.497423    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.497618    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.497767    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
	I0813 20:49:11.503422    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0813 20:49:11.503821    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.504277    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.504306    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.504582    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.505173    3412 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:49:11.505219    3412 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:49:11.518799    3412 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0813 20:49:11.519214    3412 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:49:11.519629    3412 main.go:130] libmachine: Using API Version  1
	I0813 20:49:11.519655    3412 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:49:11.519995    3412 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:49:11.520180    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetState
	I0813 20:49:11.523435    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .DriverName
	I0813 20:49:11.523650    3412 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:11.523666    3412 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:49:11.523682    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHHostname
	I0813 20:49:11.529028    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529396    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:90:8e", ip: ""} in network mk-pause-20210813204600-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:47:21 +0000 UTC Type:0 Mac:52:54:00:50:90:8e Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-20210813204600-30853 Clientid:01:52:54:00:50:90:8e}
	I0813 20:49:11.529423    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | domain pause-20210813204600-30853 has defined IP address 192.168.39.61 and MAC address 52:54:00:50:90:8e in network mk-pause-20210813204600-30853
	I0813 20:49:11.529571    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHPort
	I0813 20:49:11.529736    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHKeyPath
	I0813 20:49:11.529865    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .GetSSHUsername
	I0813 20:49:11.530004    3412 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813204600-30853/id_rsa Username:docker}
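
The two sshutil lines above each build an SSH client from the VM's IP, port 22, the per-machine private key, and the docker user. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh and a hypothetical helper name newSSHClient (an illustration, not minikube's sshutil code):

package sketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials ip:22 with key-based auth, mirroring the sshutil lines
// in the log. Skipping host-key verification is tolerable only because these
// are throwaway test VMs; a production client would pin the host key.
func newSSHClient(ip, keyPath, user string) (*ssh.Client, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, fmt.Errorf("read key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	return ssh.Dial("tcp", ip+":22", cfg)
}
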
	I0813 20:49:11.605965    3412 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:49:11.606090    3412 node_ready.go:35] waiting up to 6m0s for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610421    3412 node_ready.go:49] node "pause-20210813204600-30853" has status "Ready":"True"
	I0813 20:49:11.610442    3412 node_ready.go:38] duration metric: took 4.320432ms waiting for node "pause-20210813204600-30853" to be "Ready" ...
	I0813 20:49:11.610453    3412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0813 20:49:11.616546    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:49:11.616740    3412 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631733    3412 pod_ready.go:92] pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.631757    3412 pod_ready.go:81] duration metric: took 14.992576ms waiting for pod "coredns-558bd4d5db-4grvm" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.631771    3412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639091    3412 pod_ready.go:92] pod "etcd-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.639117    3412 pod_ready.go:81] duration metric: took 7.33748ms waiting for pod "etcd-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.639129    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645487    3412 pod_ready.go:92] pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.645508    3412 pod_ready.go:81] duration metric: took 6.370538ms waiting for pod "kube-apiserver-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.645519    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652583    3412 pod_ready.go:92] pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:11.652602    3412 pod_ready.go:81] duration metric: took 7.073719ms waiting for pod "kube-controller-manager-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.652614    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:11.658710    3412 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:49:12.038755    3412 pod_ready.go:92] pod "kube-proxy-4n8kb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.038776    3412 pod_ready.go:81] duration metric: took 386.155583ms waiting for pod "kube-proxy-4n8kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.038787    3412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.069005    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069032    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069056    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069036    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069332    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069333    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069336    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069348    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069357    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069364    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069368    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069371    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069377    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069380    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069631    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069649    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069664    3412 main.go:130] libmachine: (pause-20210813204600-30853) DBG | Closing plugin on server side
	I0813 20:49:12.069635    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069693    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.069706    3412 main.go:130] libmachine: Making call to close driver server
	I0813 20:49:12.069717    3412 main.go:130] libmachine: (pause-20210813204600-30853) Calling .Close
	I0813 20:49:12.069914    3412 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:49:12.069931    3412 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:49:12.071889    3412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:49:12.071910    3412 addons.go:344] enableAddons completed in 617.161828ms
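
The addon flow that just completed amounts to copying each manifest into /etc/kubernetes/addons and applying it with the cluster's own kubectl under the in-VM kubeconfig, exactly as the ssh_runner lines show. A sketch of that apply step with a hypothetical applyManifest helper, run locally here rather than over SSH:

package sketch

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the ssh_runner invocation in the log: sudo, an
// explicit KUBECONFIG, and the version-matched kubectl binary.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.21.3/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}
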
	I0813 20:49:12.434704    3412 pod_ready.go:92] pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:12.434726    3412 pod_ready.go:81] duration metric: took 395.931948ms waiting for pod "kube-scheduler-pause-20210813204600-30853" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:12.434734    3412 pod_ready.go:38] duration metric: took 824.269103ms of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
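
Each pod_ready wait logged above is effectively a poll of the pod's PodReady condition until it reports True or the budget runs out. A minimal client-go sketch of such a loop, with a hypothetical waitPodReady helper (an illustration, not minikube's pod_ready.go):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod reports the PodReady condition as True.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

Returning false, nil on Get errors keeps the poll alive across transient API hiccups, which matters during the apiserver restarts this test exercises.
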
	I0813 20:49:12.434752    3412 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:12.434790    3412 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:12.451457    3412 api_server.go:70] duration metric: took 996.725767ms to wait for apiserver process to appear ...
	I0813 20:49:12.451487    3412 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:12.451500    3412 api_server.go:239] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0813 20:49:12.457776    3412 api_server.go:265] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0813 20:49:12.458697    3412 api_server.go:139] control plane version: v1.21.3
	I0813 20:49:12.458716    3412 api_server.go:129] duration metric: took 7.221294ms to wait for apiserver health ...
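
The healthz check above is a plain HTTPS GET that treats a 200 response with body "ok" as healthy. A short sketch with a hypothetical apiserverHealthy helper; a real client would trust the cluster CA rather than skipping certificate verification:

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy probes e.g. https://192.168.39.61:8443/healthz.
func apiserverHealthy(endpoint string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip cert checks instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}
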
	I0813 20:49:12.458726    3412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:12.637203    3412 system_pods.go:59] 7 kube-system pods found
	I0813 20:49:12.637240    3412 system_pods.go:61] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:12.637248    3412 system_pods.go:61] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:12.637254    3412 system_pods.go:61] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:12.637261    3412 system_pods.go:61] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:12.637266    3412 system_pods.go:61] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:12.637272    3412 system_pods.go:61] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:12.637281    3412 system_pods.go:61] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:12.637290    3412 system_pods.go:74] duration metric: took 178.557519ms to wait for pod list to return data ...
	I0813 20:49:12.637299    3412 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:12.841324    3412 default_sa.go:45] found service account: "default"
	I0813 20:49:12.841350    3412 default_sa.go:55] duration metric: took 204.040505ms for default service account to be created ...
	I0813 20:49:12.841359    3412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:13.042158    3412 system_pods.go:86] 7 kube-system pods found
	I0813 20:49:13.042205    3412 system_pods.go:89] "coredns-558bd4d5db-4grvm" [72033717-35d7-4397-b3c5-28028e7270f3] Running
	I0813 20:49:13.042216    3412 system_pods.go:89] "etcd-pause-20210813204600-30853" [5796d7a2-d937-46ea-9f78-d39873dbed3c] Running
	I0813 20:49:13.042224    3412 system_pods.go:89] "kube-apiserver-pause-20210813204600-30853" [1cd91fb9-a6fe-469c-a0eb-407707a46d7e] Running
	I0813 20:49:13.042237    3412 system_pods.go:89] "kube-controller-manager-pause-20210813204600-30853" [b84efacf-2927-4b3c-a2c7-6fce8f8932c2] Running
	I0813 20:49:13.042245    3412 system_pods.go:89] "kube-proxy-4n8kb" [7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9] Running
	I0813 20:49:13.042257    3412 system_pods.go:89] "kube-scheduler-pause-20210813204600-30853" [1b87678c-2291-4cbc-b1d2-48f551d2265e] Running
	I0813 20:49:13.042278    3412 system_pods.go:89] "storage-provisioner" [aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:49:13.042295    3412 system_pods.go:126] duration metric: took 200.930278ms to wait for k8s-apps to be running ...
	I0813 20:49:13.042313    3412 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:13.042369    3412 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:13.056816    3412 system_svc.go:56] duration metric: WaitForService took 14.491659ms to wait for kubelet.
	I0813 20:49:13.056852    3412 kubeadm.go:547] duration metric: took 1.60212918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
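
The kubelet check above leans entirely on systemctl's exit status: is-active --quiet prints nothing and exits 0 only if the unit is active. A local sketch with a hypothetical serviceActive helper (minikube issues the command over SSH instead):

package sketch

import "os/exec"

// serviceActive reports whether a systemd unit is active,
// e.g. serviceActive("kubelet").
func serviceActive(unit string) bool {
	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}
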
	I0813 20:49:13.056882    3412 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:13.236184    3412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:49:13.236241    3412 node_conditions.go:123] node cpu capacity is 2
	I0813 20:49:13.236260    3412 node_conditions.go:105] duration metric: took 179.373183ms to run NodePressure ...
	I0813 20:49:13.236273    3412 start.go:231] waiting for startup goroutines ...
	I0813 20:49:13.296415    3412 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:49:13.298518    3412 out.go:177] * Done! kubectl is now configured to use "pause-20210813204600-30853" cluster and "default" namespace by default
	I0813 20:49:10.080830    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:49:10.579566    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.540519    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": read tcp 192.168.50.1:40792->192.168.50.24:8443: read: connection reset by peer
	I0813 20:49:14.579739    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:14.580451    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.079298    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.079947    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:15.579678    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:15.580450    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:16.078922    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:16.079480    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:16.578921    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:16.579558    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:17.079061    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:17.079634    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:17.578938    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:17.579564    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:18.078941    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:18.079479    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:18.579014    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:18.579747    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:19.078958    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:19.079711    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:19.578954    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:19.579634    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:20.079244    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:20.079886    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:20.579532    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:20.580176    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:21.079797    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:21.080567    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:21.578936    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:21.579550    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:22.079538    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:22.080490    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:22.578975    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:22.579658    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:23.079124    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:23.079812    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:23.579375    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:23.580065    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:24.079644    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:24.080385    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:24.578980    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:24.579783    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:25.079466    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:25.080219    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:25.579888    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:25.580575    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:26.078960    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:26.079729    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:26.579276    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:26.580017    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:27.079594    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:27.080267    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:27.579936    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:27.580639    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:28.078928    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:28.079468    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:28.579009    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:28.579597    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:29.079134    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:29.079742    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
	I0813 20:49:29.578947    2943 api_server.go:239] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I0813 20:49:29.579539    2943 api_server.go:255] stopped: https://192.168.50.24:8443/healthz: Get "https://192.168.50.24:8443/healthz": dial tcp 192.168.50.24:8443: connect: connection refused
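
The interleaved lines from pid 2943 above belong to a second cluster whose apiserver has gone down: its client reprobes /healthz roughly every half second, riding out connection resets and refusals until its deadline expires. A sketch of that fixed-interval retry, reusing the hypothetical apiserverHealthy probe from the earlier sketch (same package):

package sketch

import (
	"fmt"
	"time"
)

// waitHealthy retries the probe every 500ms until success or timeout,
// matching the ~0.5s spacing between healthz attempts in the log.
func waitHealthy(endpoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := apiserverHealthy(endpoint); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", endpoint, timeout)
}
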
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:31 UTC. --
	Aug 13 20:49:30 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:30.539746354Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,StartedAt:1628887753062172592,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/containers/storage-provisioner/3a59d7be,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/volumes/kubernetes.io~projected/kube-api-access-8s2qn,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76/storage-provisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b245ad1a-39d7-4fce-9e27-8c5cb8bbe8ff name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.597078086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f8692867-1b37-409c-b1da-ecd460b3b35f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.597867118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f8692867-1b37-409c-b1da-ecd460b3b35f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.598060443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f8692867-1b37-409c-b1da-ecd460b3b35f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.644103242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d7e70794-e0cd-4666-b8d3-43c3d3f6beb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.644429626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d7e70794-e0cd-4666-b8d3-43c3d3f6beb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.644600314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d7e70794-e0cd-4666-b8d3-43c3d3f6beb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.685374794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c7086bca-5a26-45d4-a650-963b75f6e546 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.685537612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c7086bca-5a26-45d4-a650-963b75f6e546 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.685729457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c7086bca-5a26-45d4-a650-963b75f6e546 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.732272375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1bc7b250-db9a-42bb-83dc-2fac5105b6ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.732444076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1bc7b250-db9a-42bb-83dc-2fac5105b6ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.732606495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1bc7b250-db9a-42bb-83dc-2fac5105b6ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.784777540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=98402a0e-0961-4eaa-a95b-d02d305487c7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.784940112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=98402a0e-0961-4eaa-a95b-d02d305487c7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.785148459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=98402a0e-0961-4eaa-a95b-d02d305487c7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.834173567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a1de32a-6329-4d49-a4c6-3850dc883be8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.834434175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a1de32a-6329-4d49-a4c6-3850dc883be8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.834749315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a1de32a-6329-4d49-a4c6-3850dc883be8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.878551102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=163a3391-d8ce-4577-8d83-4d490457c873 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.878741431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=163a3391-d8ce-4577-8d83-4d490457c873 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.880097332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=163a3391-d8ce-4577-8d83-4d490457c873 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.936900678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a6bfd64-43a6-43e8-8b05-0e2f393da37d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.937003355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a6bfd64-43a6-43e8-8b05-0e2f393da37d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 20:49:31 pause-20210813204600-30853 crio[3772]: time="2021-08-13 20:49:31.937376946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5,PodSandboxId:2a6ab48b5042ade9b6a37c7aac2fe0cdf97d917d6d85e26985996d3ce5240f5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628887753000895625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2d90d0-7a2e-40cf-b9ac-81fb9e2c1e76,},Annotations:map[string]string{io.kubernetes.container.hash: 739bee08,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6,PodSandboxId:8088cc5d3d38a71eeb18099c6aa71d6cb3800819abdad4360b2447402da7e271,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,State:CONTAINER_RUNNING,CreatedAt:1628887705853932123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-4grvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72033717-35d7-4397-b3c5-28028e7270f3,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab352,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164,PodSandboxId:564d5f18f75ede8f626198a16c95afc903cf2041e09d399d21b77871808de057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,State:CONTAINER_RUNNING,CreatedAt:1628887705650799846,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4n8kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8a1bad-1f97-44ad-a3e3-fb9d52cfd0d9,},Annotations:map[
string]string{io.kubernetes.container.hash: 7bfe6d1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf,PodSandboxId:e9920031330011b4c49134b2a15f55362e2c2240abc371cc80304fecb57cd38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,State:CONTAINER_RUNNING,CreatedAt:1628887680893103098,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 545d21e989d5ed3752d22eeb8bd8ffce,},Annotations:map[string]string{io.kubernetes.container.hash: 5d26fc81,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf,PodSandboxId:55ddf08f50f8cf83834d4a6a622d8d391bff6b2372f6c39b238025a67ffbc7ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,State:CONTAINER_RUNNING,CreatedAt:1628887680371988051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ace00bb4fb8a8a9569ff7dae47e01d30,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659,PodSandboxId:6c56d5bf50b7a35458f6cd14f7241a769059af78ce2fb71d82b52d87fa45aaf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,State:CONTAINER_RUNNING,CreatedAt:1628887679879440634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b586eaff819d4c98a938914befbf359d,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b,PodSandboxId:f228ab759c26a821ce8551f1fff11323baa396e0ed7d0cd4655013fc48ba3029,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,State:CONTAINER_RUNNING,CreatedAt:1628887679106710832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-20210813204600-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb76671b6b79a1d552449a94a3dbfa98,},Annotations:map[string]string{io.kubernetes.container.hash: 46519583,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a6bfd64-43a6-43e8-8b05-0e2f393da37d name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	10dab2af99578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago       Running             storage-provisioner       0                   2a6ab48b5042a
	d33287457e451       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   About a minute ago   Running             coredns                   0                   8088cc5d3d38a
	2e50c328d7104       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   564d5f18f75ed
	ac4bf726a8a57       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   e992003133001
	66655950d3afa       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   55ddf08f50f8c
	83df9633ff352       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   6c56d5bf50b7a
	82d4de99d88e5       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   f228ab759c26a
	
	* 
	* ==> coredns [d33287457e4518ec553ab0fac839363dc548783015b04185218fb87de65163c6] <==
	* I0813 20:48:56.155624       1 trace.go:205] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[1427131847]: [30.002619331s] [30.002619331s] END
	E0813 20:48:56.155739       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155858       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.154) (total time: 30001ms):
	Trace[911902081]: [30.001733139s] [30.001733139s] END
	E0813 20:48:56.155865       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:48:56.155918       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:48:26.152) (total time: 30002ms):
	Trace[2019727887]: [30.002706635s] [30.002706635s] END
	E0813 20:48:56.156104       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813204600-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813204600-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813204600-30853
	                    minikube.k8s.io/updated_at=2021_08_13T20_48_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:48:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813204600-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:18 +0000   Fri, 13 Aug 2021 20:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-20210813204600-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 07e647a52575478182b10082d1b9460a
	  System UUID:                07e647a5-2575-4781-82b1-0082d1b9460a
	  Boot ID:                    1c1f8243-ce7f-455c-a669-de6493424040
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-4grvm                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 etcd-pause-20210813204600-30853                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         75s
	  kube-system                 kube-apiserver-pause-20210813204600-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-pause-20210813204600-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-4n8kb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-pause-20210813204600-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  98s (x6 over 98s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x5 over 98s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x5 over 98s)  kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 76s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet     Node pause-20210813204600-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                74s                kubelet     Node pause-20210813204600-30853 status is now: NodeReady
	  Normal  Starting                 66s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.165176] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.050992] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.137498] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1726 comm=systemd-network
	[  +1.376463] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007022] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.624786] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +20.400328] systemd-fstab-generator[2162]: Ignoring "noauto" for root device
	[  +0.134832] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +0.282454] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +6.552961] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[Aug13 20:48] systemd-fstab-generator[2800]: Ignoring "noauto" for root device
	[ +13.894926] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.479825] kauditd_printk_skb: 80 callbacks suppressed
	[Aug13 20:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.187207] systemd-fstab-generator[4013]: Ignoring "noauto" for root device
	[  +0.260965] systemd-fstab-generator[4026]: Ignoring "noauto" for root device
	[  +0.242550] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
	[  +3.941917] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.801138] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[  +1.042940] systemd-fstab-generator[4407]: Ignoring "noauto" for root device
	[  +7.458737] systemd-fstab-generator[4909]: Ignoring "noauto" for root device
	[  +0.592251] systemd-fstab-generator[4937]: Ignoring "noauto" for root device
	[  +1.206597] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [ac4bf726a8a57d01c18c34384b2c09b892098bbfadc2ba567752668c63d36dcf] <==
	* 2021-08-13 20:48:01.922733 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:48:01.952757 I | embed: serving client requests on 192.168.39.61:2379
	2021-08-13 20:48:01.954160 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:48:01.975055 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:48:12.629799 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (355.071918ms) to execute
	2021-08-13 20:48:18.621673 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" " with result "range_response_count:0 size:5" took too long (1.837036221s) to execute
	2021-08-13 20:48:18.622362 W | wal: sync duration of 1.607346013s, expected less than 1s
	2021-08-13 20:48:18.623060 W | etcdserver: request "header:<ID:12771218163585540132 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20210813204600-30853.169af8bae7fa23bf\" value_size:632 lease:3547846126730764118 >> failure:<>>" with result "size:16" took too long (1.606807479s) to execute
	2021-08-13 20:48:18.624926 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.461501725s) to execute
	2021-08-13 20:48:18.628021 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813204600-30853\" " with result "range_response_count:1 size:3982" took too long (1.370325429s) to execute
	2021-08-13 20:48:21.346921 W | wal: sync duration of 1.299304523s, expected less than 1s
	2021-08-13 20:48:21.347401 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.068677828s) to execute
	2021-08-13 20:48:24.481477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:26.500706 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (233.724165ms) to execute
	2021-08-13 20:48:26.501137 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gm2bv\" " with result "range_response_count:1 size:4473" took too long (378.683681ms) to execute
	2021-08-13 20:48:26.502059 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-4grvm\" " with result "range_response_count:1 size:4461" took too long (270.883259ms) to execute
	2021-08-13 20:48:28.869625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:38.868019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:48.868044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:48:58.870803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:49:00.399177 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.157615469s) to execute
	2021-08-13 20:49:00.400612 W | etcdserver: request "header:<ID:12771218163585540646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" mod_revision:468 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" value_size:584 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20210813204600-30853\" > >>" with result "size:16" took too long (200.747119ms) to execute
	2021-08-13 20:49:00.400917 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813204600-30853\" " with result "range_response_count:1 size:6093" took too long (1.158534213s) to execute
	2021-08-13 20:49:00.401297 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (569.698ms) to execute
	2021-08-13 20:49:08.868736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:49:32 up 2 min,  0 users,  load average: 1.59, 0.77, 0.29
	Linux pause-20210813204600-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [82d4de99d88e5db75cd5ac9b4f99ddc47243725d0f253680813e2c5fa7d5605b] <==
	* Trace[1175388272]: [1.383273804s] [1.383273804s] END
	I0813 20:48:18.647776       1 trace.go:205] Trace[1480647024]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.255) (total time: 1391ms):
	Trace[1480647024]: ---"Object stored in database" 1379ms (20:48:00.638)
	Trace[1480647024]: [1.391864844s] [1.391864844s] END
	I0813 20:48:18.651341       1 trace.go:205] Trace[532588033]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[532588033]: [1.395160654s] [1.395160654s] END
	I0813 20:48:18.651913       1 trace.go:205] Trace[486245217]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.39.61,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:17.256) (total time: 1395ms):
	Trace[486245217]: [1.395849853s] [1.395849853s] END
	I0813 20:48:18.659173       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:48:21.348539       1 trace.go:205] Trace[264690694]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:48:20.278) (total time: 1070ms):
	Trace[264690694]: [1.070400996s] [1.070400996s] END
	I0813 20:48:22.995388       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:48:23.545730       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:48:37.713151       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:48:37.713388       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:48:37.713410       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:49:00.401993       1 trace.go:205] Trace[875370503]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.240) (total time: 1161ms):
	Trace[875370503]: ---"About to write a response" 1161ms (20:49:00.401)
	Trace[875370503]: [1.161749328s] [1.161749328s] END
	I0813 20:49:00.403705       1 trace.go:205] Trace[1375945297]: "Get" url:/api/v1/nodes/pause-20210813204600-30853,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:48:59.241) (total time: 1162ms):
	Trace[1375945297]: ---"About to write a response" 1161ms (20:49:00.403)
	Trace[1375945297]: [1.162052238s] [1.162052238s] END
	I0813 20:49:08.639766       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:49:08.639943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:49:08.639963       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [83df9633ff35249481c120eab058a8fdd8d33a47c11226b4b47b4880508a6659] <==
	* I0813 20:48:22.670523       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0813 20:48:22.676047       1 shared_informer.go:247] Caches are synced for job 
	I0813 20:48:22.676648       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:48:22.680632       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:48:22.680827       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0813 20:48:22.713877       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:48:22.743162       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:48:22.743798       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:48:22.849717       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0813 20:48:22.888695       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:22.888733       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:48:22.923738       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:48:22.923844       1 disruption.go:371] Sending events to api server.
	I0813 20:48:22.939921       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:48:23.006118       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4n8kb"
	E0813 20:48:23.080425       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4ec5a127-3b2a-4f66-8321-f0bab85709c0", ResourceVersion:"304", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484491, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000abfda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000abfdb8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014a9280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00142b740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abf
dd0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000abfde8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014a92c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001419440), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00144e5a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000843e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00163c430)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00144e608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:48:23.316478       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352329       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:48:23.352427       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:48:23.554638       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:48:23.583893       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:48:23.645559       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gm2bv"
	I0813 20:48:23.652683       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4grvm"
	I0813 20:48:23.772425       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gm2bv"
	
	* 
	* ==> kube-proxy [2e50c328d71043a1e625771534f9d2aa9765226389a8fbf59f388c3203cb0164] <==
	* I0813 20:48:26.523023       1 node.go:172] Successfully retrieved node IP: 192.168.39.61
	I0813 20:48:26.523578       1 server_others.go:140] Detected node IP 192.168.39.61
	W0813 20:48:26.523867       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:48:26.597173       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:48:26.597466       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:48:26.597629       1 server_others.go:212] Using iptables Proxier.
	I0813 20:48:26.599876       1 server.go:643] Version: v1.21.3
	I0813 20:48:26.601871       1 config.go:315] Starting service config controller
	I0813 20:48:26.601925       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:48:26.601964       1 config.go:224] Starting endpoint slice config controller
	I0813 20:48:26.601993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:48:26.626937       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:48:26.631306       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:48:26.702322       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [66655950d3afae87c7471e266638064af971fd7ca6b6b84ad0519a60c9afefcf] <==
	* E0813 20:48:07.253858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:48:07.253939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:07.254089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:07.254407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:48:07.254763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:07.256625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:07.257805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:07.257988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:07.258811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:07.259413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:48:07.261132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.091658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:08.147159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:08.202089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:08.257172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:08.318956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.416964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:08.426635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:48:08.429682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.498271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.623065       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:48:08.623400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:08.652497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 20:48:11.848968       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:47:17 UTC, end at Fri 2021-08-13 20:49:32 UTC. --
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397743    4917 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397782    4917 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397793    4917 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397803    4917 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397879    4917 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397942    4917 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.397951    4917 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398002    4917 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398022    4917 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398319    4917 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398339    4917 remote_image.go:50] parsed scheme: ""
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398345    4917 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398368    4917 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398376    4917 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398465    4917 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398484    4917 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398529    4917 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398543    4917 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.398824    4917 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.404419    4917 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.2" apiVersion="v1alpha1"
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: E0813 20:49:28.719939    4917 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:49:28 pause-20210813204600-30853 kubelet[4917]: I0813 20:49:28.721488    4917 server.go:1190] "Started kubelet"
	Aug 13 20:49:28 pause-20210813204600-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:49:28 pause-20210813204600-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [10dab2af995787dd72ad622e9ca47f85a4174335cc5d2112353b4efb1d1683d5] <==
	* I0813 20:49:13.139876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:49:13.163404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:49:13.163867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:49:13.184473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:49:13.184758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	I0813 20:49:13.194291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e31d828-490b-41db-8431-f66bfdb15cd4", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c became leader
	I0813 20:49:13.286143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210813204600-30853_1011eca7-0118-42ff-a309-02c0900c2c7c!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813204600-30853 -n pause-20210813204600-30853: exit status 2 (270.822276ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
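Note on the --format={{.APIServer}} flag used above: minikube status formats its output through a Go text/template, so only the requested field is printed. A minimal sketch of that mechanism, assuming an illustrative Status struct rather than minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the fields minikube's status command
	// exposes to --format templates (illustrative only, not the real type).
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
		// --format={{.APIServer}} renders just that one field, e.g. "Paused".
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = tmpl.Execute(os.Stdout, st)
	}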
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813204600-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813204600-30853 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813204600-30853 describe pod : exit status 1 (60.34833ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813204600-30853 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (9.84s)
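The empty "describe pod" failure above is the expected outcome of the post-mortem flow: the jsonpath query found no non-running pods, so kubectl was invoked with zero pod names and exited with "resource name may not be empty". A rough sketch of that step, using a hypothetical helper rather than the actual helpers_test.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// describeNonRunningPods mimics the post-mortem step: podNames is the
	// space-separated jsonpath output and may be empty.
	func describeNonRunningPods(kubeContext, podNames string) error {
		args := []string{"--context", kubeContext, "describe", "pod"}
		args = append(args, strings.Fields(podNames)...) // no-op for an empty list
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err // exit status 1: "resource name may not be empty"
	}

	func main() {
		// With no non-running pods this reproduces the exit status 1 seen above.
		_ = describeNonRunningPods("pause-20210813204600-30853", "")
	}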

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (26.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210813205917-30853 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210813205917-30853 --alsologtostderr -v=1: exit status 80 (2.612343504s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210813205917-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:08:43.176076   12128 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:08:43.176911   12128 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:08:43.176929   12128 out.go:311] Setting ErrFile to fd 2...
	I0813 21:08:43.176936   12128 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:08:43.177099   12128 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:08:43.177331   12128 out.go:305] Setting JSON to false
	I0813 21:08:43.177363   12128 mustload.go:65] Loading cluster: embed-certs-20210813205917-30853
	I0813 21:08:43.177800   12128 config.go:177] Loaded profile config "embed-certs-20210813205917-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:08:43.178400   12128 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:43.178467   12128 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:43.189759   12128 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0813 21:08:43.190232   12128 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:43.190758   12128 main.go:130] libmachine: Using API Version  1
	I0813 21:08:43.190781   12128 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:43.191173   12128 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:43.191383   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:43.194385   12128 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:43.194718   12128 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:43.194761   12128 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:43.205045   12128 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0813 21:08:43.205554   12128 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:43.206036   12128 main.go:130] libmachine: Using API Version  1
	I0813 21:08:43.206062   12128 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:43.206411   12128 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:43.206569   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:43.207466   12128 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210813205917-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:08:43.210100   12128 out.go:177] * Pausing node embed-certs-20210813205917-30853 ... 
	I0813 21:08:43.210126   12128 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:43.210439   12128 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:43.210478   12128 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:43.221316   12128 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38513
	I0813 21:08:43.221678   12128 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:43.222100   12128 main.go:130] libmachine: Using API Version  1
	I0813 21:08:43.222121   12128 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:43.222451   12128 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:43.222650   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:43.222869   12128 ssh_runner.go:149] Run: systemctl --version
	I0813 21:08:43.222899   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:43.227824   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:43.228241   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:43.228274   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:43.228408   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:43.228591   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:43.228736   12128 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:43.228864   12128 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:43.329316   12128 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:43.340184   12128 pause.go:50] kubelet running: true
	I0813 21:08:43.340257   12128 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:08:43.645463   12128 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:08:43.645566   12128 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:08:43.768478   12128 cri.go:76] found id: "1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce"
	I0813 21:08:43.768511   12128 cri.go:76] found id: "2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca"
	I0813 21:08:43.768518   12128 cri.go:76] found id: "cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945"
	I0813 21:08:43.768530   12128 cri.go:76] found id: "d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982"
	I0813 21:08:43.768543   12128 cri.go:76] found id: "8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a"
	I0813 21:08:43.768549   12128 cri.go:76] found id: "9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50"
	I0813 21:08:43.768554   12128 cri.go:76] found id: "43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b"
	I0813 21:08:43.768560   12128 cri.go:76] found id: "1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271"
	I0813 21:08:43.768568   12128 cri.go:76] found id: "91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f"
	I0813 21:08:43.768578   12128 cri.go:76] found id: ""
	I0813 21:08:43.768625   12128 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210813205917-30853 --alsologtostderr -v=1 failed: exit status 80
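Before the failing "sudo runc list -f json" call, the pause path enumerates CRI containers with one "crictl ps -a --quiet" per target namespace, joined into a single "sudo -s eval" invocation (visible verbatim in the stderr log above). A small sketch reconstructing that command assembly, inferred from the log output rather than taken from minikube's source:

	package main

	import (
		"fmt"
		"strings"
	)

	// buildCrictlList joins one "crictl ps" per namespace with ";" so a
	// single guest shell round-trip lists containers for all namespaces.
	func buildCrictlList(namespaces []string) string {
		parts := make([]string, 0, len(namespaces))
		for _, ns := range namespaces {
			parts = append(parts, fmt.Sprintf(
				"crictl ps -a --quiet --label io.kubernetes.pod.namespace=%s", ns))
		}
		return `sudo -s eval "` + strings.Join(parts, "; ") + `"`
	}

	func main() {
		fmt.Println(buildCrictlList([]string{
			"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator",
		}))
	}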
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853: exit status 2 (253.948669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813205917-30853 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210813205917-30853 logs -n 25: exit status 110 (11.227375813s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:03 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | --memory=2048                                     |                                                 |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                                 |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                               |                               |
	|         | --cni=bridge --driver=kvm2                        |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	| ssh     | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:00 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:03 UTC | Fri, 13 Aug 2021 20:59:03 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	| delete  | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 20:59:17 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:23 UTC | Fri, 13 Aug 2021 21:00:44 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:56 UTC | Fri, 13 Aug 2021 21:00:57 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:57 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813204600-30853         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:01 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | kubernetes-upgrade-20210813204600-30853           |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813210102-30853      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | disable-driver-mounts-20210813210102-30853        |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:17 UTC | Fri, 13 Aug 2021 21:01:05 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:18 UTC | Fri, 13 Aug 2021 21:01:19 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:19 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 21:02:15 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:27 UTC | Fri, 13 Aug 2021 21:02:28 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:03:15 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio           |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:03:32
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:03:32.257678   11600 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:03:32.257760   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257764   11600 out.go:311] Setting ErrFile to fd 2...
	I0813 21:03:32.257767   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257889   11600 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:03:32.258149   11600 out.go:305] Setting JSON to false
	I0813 21:03:32.297164   11600 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9974,"bootTime":1628878638,"procs":184,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:03:32.297442   11600 start.go:121] virtualization: kvm guest
	I0813 21:03:32.300208   11600 out.go:177] * [no-preload-20210813205915-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:03:32.301763   11600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:03:32.300370   11600 notify.go:169] Checking for updates...
	I0813 21:03:32.303324   11600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:03:32.304875   11600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:03:32.306390   11600 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:03:32.306988   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:03:32.307576   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.307638   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.319235   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0813 21:03:32.319644   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.320320   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.320347   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.320748   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.320979   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.321189   11600 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:03:32.321646   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.321692   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.332966   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0813 21:03:32.333332   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.333819   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.333847   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.334199   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.334372   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.365034   11600 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:03:32.365061   11600 start.go:278] selected driver: kvm2
	I0813 21:03:32.365067   11600 start.go:751] validating driver "kvm2" against &{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.365197   11600 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:03:32.367047   11600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.367426   11600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:03:32.378154   11600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:03:32.378447   11600 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:03:32.378474   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:03:32.378482   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:03:32.378489   11600 start_flags.go:277] config:
	{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.378585   11600 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:30.512688   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:33.010993   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:32.670472   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:35.171315   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
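
The pod_ready.go lines above and below are two concurrent minikube runs polling the Ready condition of their metrics-server pods until it flips to True (it never does here, since the test points metrics-server at the fake.domain registry). As a hedged sketch of that Ready-condition poll using client-go: the kubeconfig lookup, namespace, and pod name below are taken from this log; everything else is illustrative, not minikube's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(c *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for {
		ok, err := podReady(client, "kube-system", "metrics-server-7c784ccb57-wcctz")
		if err == nil && ok {
			fmt.Println("Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`) // mirrors the log lines above
		time.Sleep(2 * time.Second)
	}
}
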
	I0813 21:03:30.963285   11447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210813210102-30853" ...
	I0813 21:03:30.963310   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Start
	I0813 21:03:30.963467   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring networks are active...
	I0813 21:03:30.965431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network default is active
	I0813 21:03:30.965733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network mk-default-k8s-different-port-20210813210102-30853 is active
	I0813 21:03:30.966083   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Getting domain xml...
	I0813 21:03:30.968061   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Creating domain...
	I0813 21:03:31.416170   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting to get IP...
	I0813 21:03:31.417365   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418005   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has current primary IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418042   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Found IP for machine: 192.168.50.136
	I0813 21:03:31.418064   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserving static IP address...
	I0813 21:03:31.418520   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.418572   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | skip adding static IP to network mk-default-k8s-different-port-20210813210102-30853 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"}
	I0813 21:03:31.418592   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserved static IP address: 192.168.50.136
	I0813 21:03:31.418609   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting for SSH to be available...
	I0813 21:03:31.418628   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:31.424645   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.425182   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425389   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:31.425422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:31.425464   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:31.425482   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:31.425509   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
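
The DBG lines above show libmachine shelling out to the system SSH client to probe the freshly restarted VM with a bare "exit 0". A minimal reconstruction of that invocation, using the same flags as the logged argument vector; this is not minikube's code, and the key path is shortened to a labeled placeholder:

package main

import (
	"os"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@192.168.50.136",
		"-o", "IdentitiesOnly=yes",
		"-i", os.ExpandEnv("$HOME/.minikube/machines/PROFILE/id_rsa"), // placeholder, not the real path
		"-p", "22",
		"exit 0", // the reachability probe the log runs
	}
	cmd := exec.Command("/usr/bin/ssh", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	// A non-zero exit (status 255 while sshd is still booting) is what the
	// log later reports as "Error getting ssh command 'exit 0'".
	_ = cmd.Run()
}
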
	I0813 21:03:32.380458   11600 out.go:177] * Starting control plane node no-preload-20210813205915-30853 in cluster no-preload-20210813205915-30853
	I0813 21:03:32.380479   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:03:32.380628   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:03:32.380658   11600 cache.go:108] acquiring lock: {Name:mkb38baead8d508ff836651dee18a7788cf32c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380644   11600 cache.go:108] acquiring lock: {Name:mk46180cf67d5c541fa2597ef8e0122b51c3d66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380670   11600 cache.go:108] acquiring lock: {Name:mk7bb3b696fd3372110b0be599d95315e027c7ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380696   11600 cache.go:108] acquiring lock: {Name:mkf1d6f5d79a8fed4d2cc99505f5f3464b88e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380719   11600 cache.go:108] acquiring lock: {Name:mk828c96511ca39b5ec24da9b6afedd4727bdcf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380743   11600 cache.go:108] acquiring lock: {Name:mk03e6bcc333bfad143239419641099a94fed11e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380784   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 21:03:32.380790   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 21:03:32.380787   11600 cache.go:108] acquiring lock: {Name:mk928ab7caca14c2ebd27b364dc38d466ea61870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380747   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 21:03:32.380809   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 21:03:32.380803   11600 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 161.844µs
	I0813 21:03:32.380822   11600 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 21:03:32.380808   11600 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 149.17µs
	I0813 21:03:32.380819   11600 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 164.006µs
	I0813 21:03:32.380839   11600 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 21:03:32.380837   11600 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:03:32.380848   11600 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 21:03:32.380801   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 21:03:32.380838   11600 cache.go:108] acquiring lock: {Name:mk3d501986e0e48ddd0db3c6e93347910f1116e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380854   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 21:03:32.380853   11600 cache.go:108] acquiring lock: {Name:mkf7939d465d516c835d7d7703c105943f1ade9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380867   11600 start.go:313] acquiring machines lock for no-preload-20210813205915-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:03:32.380868   11600 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 155.968µs
	I0813 21:03:32.380881   11600 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 21:03:32.380876   11600 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 155.847µs
	I0813 21:03:32.380760   11600 cache.go:108] acquiring lock: {Name:mkec6e53ab9796f80ec65d6b99a6c3ee881fedd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380890   11600 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380896   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 21:03:32.380899   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 21:03:32.380841   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 21:03:32.380909   11600 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 73.516µs
	I0813 21:03:32.380913   11600 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 62.387µs
	I0813 21:03:32.380921   11600 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380939   11600 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380925   11600 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 136.425µs
	I0813 21:03:32.380966   11600 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380936   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 21:03:32.380982   11600 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 225.197µs
	I0813 21:03:32.380995   11600 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 21:03:32.380828   11600 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.9µs
	I0813 21:03:32.381004   11600 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 21:03:32.381012   11600 cache.go:88] Successfully saved all images to host disk.
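
The cache.go burst above is a per-image check-then-save pass: acquire a lock per image, stat the cached tar (note how the image tag's ":" becomes "_" on disk), and take the microsecond fast path when the file already exists. An illustrative sketch of that layout and fast path, under the assumption stated in the comments; it is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// tarPathFor maps a ref like "k8s.gcr.io/pause:3.4.1" to the on-disk layout
// seen in the log (cache/images/k8s.gcr.io/pause_3.4.1).
func tarPathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, "images", strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(cacheDir, image string) error {
	dst := tarPathFor(cacheDir, image)
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q: exists\n", image, dst) // the "took Nµs" fast path
		return nil
	}
	// Real code would pull the image and write the tar here.
	return fmt.Errorf("would download %s to %s", image, dst)
}

func main() {
	_ = ensureCached(os.ExpandEnv("$HOME/.minikube/cache"), "k8s.gcr.io/pause:3.4.1")
}
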
	I0813 21:03:35.012590   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.514197   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.669098   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.168374   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.013348   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.014535   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.670990   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:44.671751   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:43.440320   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:43.440353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:43.440363   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | command : exit 0
	I0813 21:03:43.440369   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | err     : exit status 255
	I0813 21:03:43.440381   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | output  : 
	I0813 21:03:47.896090   11600 start.go:317] acquired machines lock for "no-preload-20210813205915-30853" in 15.515202861s
	I0813 21:03:47.896143   11600 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:03:47.896154   11600 fix.go:55] fixHost starting: 
	I0813 21:03:47.896500   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:47.896553   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:47.909531   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0813 21:03:47.909942   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:47.910569   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:47.910588   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:47.910953   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:47.911154   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:47.911327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:03:47.913763   11600 fix.go:108] recreateIfNeeded on no-preload-20210813205915-30853: state=Stopped err=<nil>
	I0813 21:03:47.913791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	W0813 21:03:47.913946   11600 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:03:44.511774   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.514028   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:48.515447   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:47.170765   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:49.174655   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.440683   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:46.445948   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446304   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.446340   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446496   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:46.446533   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:46.446579   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:46.446601   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:46.446618   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:46.582984   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:03:46.583312   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetConfigRaw
	I0813 21:03:46.584076   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.589266   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589559   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.589588   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589810   11447 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/config.json ...
	I0813 21:03:46.590017   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590212   11447 machine.go:88] provisioning docker machine ...
	I0813 21:03:46.590232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590407   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590545   11447 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210813210102-30853"
	I0813 21:03:46.590576   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590701   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.595270   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595544   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.595577   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.595884   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596013   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596117   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.596285   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.596463   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.596478   11447 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813210102-30853 && echo "default-k8s-different-port-20210813210102-30853" | sudo tee /etc/hostname
	I0813 21:03:46.733223   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813210102-30853
	
	I0813 21:03:46.733252   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.739002   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739323   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.739359   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739481   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.739690   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739849   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739990   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.740161   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.740320   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.740349   11447 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813210102-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813210102-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813210102-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:03:46.872322   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
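
The shell block above keeps /etc/hosts idempotent: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a fresh entry. A rough Go rendering of that decision logic, illustrative only:

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry mirrors the grep/sed/tee sequence: skip if the name is
// already present, replace a 127.0.1.1 line if one exists, else append.
func ensureHostEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, line := range lines {
		if strings.HasSuffix(line, " "+name) || strings.HasSuffix(line, "\t"+name) {
			return hosts // hostname already mapped; nothing to do
		}
	}
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // the sed substitution branch
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // the tee -a append branch
}

func main() {
	fmt.Println(ensureHostEntry("127.0.0.1 localhost",
		"default-k8s-different-port-20210813210102-30853"))
}
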
	I0813 21:03:46.872366   11447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:03:46.872403   11447 buildroot.go:174] setting up certificates
	I0813 21:03:46.872413   11447 provision.go:83] configureAuth start
	I0813 21:03:46.872433   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.872715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.878075   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878404   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.878459   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878540   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.882767   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883077   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.883108   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883225   11447 provision.go:138] copyHostCerts
	I0813 21:03:46.883299   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:03:46.883314   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:03:46.883398   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:03:46.883517   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:03:46.883530   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:03:46.883563   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:03:46.883642   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:03:46.883654   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:03:46.883682   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:03:46.883763   11447 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813210102-30853 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube default-k8s-different-port-20210813210102-30853]
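
The provision.go line above generates a server certificate carrying every identity the machine may be reached by (the san=[...] list). A compact sketch of SAN-bearing certificate generation with crypto/x509; the SAN values and org are copied from the log line (the duplicate IP deduplicated), but the code is self-signed for brevity, where the real provisioner signs with the CA key (ca-key.pem):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20210813210102-30853"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs corresponding to the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.136"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20210813210102-30853"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
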
	I0813 21:03:46.987158   11447 provision.go:172] copyRemoteCerts
	I0813 21:03:46.987214   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:03:46.987238   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.992216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992440   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.992475   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992656   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.992817   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.992969   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.993066   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.083216   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0813 21:03:47.100865   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:03:47.117328   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:03:47.134074   11447 provision.go:86] duration metric: configureAuth took 261.642322ms
	I0813 21:03:47.134094   11447 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:03:47.134262   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:03:47.134353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.139472   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139780   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.139807   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139944   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.140097   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140275   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140411   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.140599   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.140769   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.140790   11447 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:03:47.633895   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:03:47.633930   11447 machine.go:91] provisioned docker machine in 1.043703131s
	I0813 21:03:47.633942   11447 start.go:267] post-start starting for "default-k8s-different-port-20210813210102-30853" (driver="kvm2")
	I0813 21:03:47.633950   11447 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:03:47.633971   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.634293   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:03:47.634328   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.639277   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639636   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.639663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.639947   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.640111   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.640242   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.734400   11447 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:03:47.740052   11447 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:03:47.740071   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:03:47.740130   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:03:47.740231   11447 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:03:47.740344   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:03:47.747174   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:03:47.764416   11447 start.go:270] post-start completed in 130.462296ms
	I0813 21:03:47.764450   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.764711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.770040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770384   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.770431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770530   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.770719   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.770894   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.771070   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.771253   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.771444   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.771459   11447 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:03:47.895861   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888627.837623344
	
	I0813 21:03:47.895892   11447 fix.go:212] guest clock: 1628888627.837623344
	I0813 21:03:47.895903   11447 fix.go:225] Guest: 2021-08-13 21:03:47.837623344 +0000 UTC Remote: 2021-08-13 21:03:47.764694239 +0000 UTC m=+16.980843358 (delta=72.929105ms)
	I0813 21:03:47.895929   11447 fix.go:196] guest clock delta is within tolerance: 72.929105ms
	I0813 21:03:47.895937   11447 fix.go:57] fixHost completed within 16.950003029s
	I0813 21:03:47.895942   11447 start.go:80] releasing machines lock for "default-k8s-different-port-20210813210102-30853", held for 16.950031669s
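
The clock check above reads the guest's clock over SSH with date +%s.%N, parses the epoch.nanoseconds reply, and compares it against the host's wall clock recorded when the command returned. A sketch of the delta computation using the exact values from this log; the 2-second tolerance is an illustrative assumption, not a value read from fix.go:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta reproduces the comparison logged by fix.go: absolute
	// difference between the guest's reported time and the host's wall clock.
	func clockDelta(guest, host time.Time) time.Duration {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		guest := time.Unix(1628888627, 837623344) // parsed from `date +%s.%N`
		host := time.Unix(1628888627, 764694239)  // host wall clock at return
		d := clockDelta(guest, host)
		fmt.Println(d, d <= 2*time.Second) // 72.929105ms true
	}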
	I0813 21:03:47.896001   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.896297   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:47.901493   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.901838   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.901870   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.902050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902228   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902976   11447 ssh_runner.go:149] Run: systemctl --version
	I0813 21:03:47.902995   11447 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:03:47.903007   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.903040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.909125   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.909452   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909630   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.909813   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.909935   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.910059   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.910088   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910489   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.910527   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910654   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.910777   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.910927   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.911072   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:48.006087   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:03:48.006215   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:03:47.916188   11600 out.go:177] * Restarting existing kvm2 VM for "no-preload-20210813205915-30853" ...
	I0813 21:03:47.916218   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Start
	I0813 21:03:47.916374   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring networks are active...
	I0813 21:03:47.918363   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network default is active
	I0813 21:03:47.918666   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network mk-no-preload-20210813205915-30853 is active
	I0813 21:03:47.919177   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Getting domain xml...
	I0813 21:03:47.921207   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating domain...
	I0813 21:03:48.385941   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting to get IP...
	I0813 21:03:48.387086   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.387686   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Found IP for machine: 192.168.105.107
	I0813 21:03:48.387718   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserving static IP address...
	I0813 21:03:48.387738   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has current primary IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.388204   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.388236   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserved static IP address: 192.168.105.107
	I0813 21:03:48.388276   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | skip adding static IP to network mk-no-preload-20210813205915-30853 - found existing host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"}
	I0813 21:03:48.388306   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:48.388326   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting for SSH to be available...
	I0813 21:03:48.393946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394418   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.394445   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:03:48.394790   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:03:48.394865   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:48.394885   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:03:48.394902   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 21:03:51.014322   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.517299   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:51.667636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.672798   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:52.032310   11447 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.026067051s)
	I0813 21:03:52.032472   11447 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:03:52.032533   11447 ssh_runner.go:149] Run: which lz4
	I0813 21:03:52.036917   11447 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:03:52.041879   11447 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:03:52.041911   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 21:03:54.836023   11447 crio.go:362] Took 2.799141 seconds to copy over tarball
	I0813 21:03:54.836104   11447 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:03:56.016199   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.747725   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:56.174092   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.745387   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:57.599639   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:58.136181   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:58.136210   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | command : exit 0
	I0813 21:03:58.136247   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | err     : exit status 255
	I0813 21:03:58.136301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | output  : 
	I0813 21:04:00.599792   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:04:00.606127   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606561   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:00.606599   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606684   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:04:00.606710   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:04:00.606759   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:04:00.606779   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:04:00.606791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 21:04:01.865012   11447 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.028876371s)
	I0813 21:04:01.865051   11447 crio.go:369] Took 7.028990 seconds to extract the tarball
	I0813 21:04:01.865065   11447 ssh_runner.go:100] rm: /preloaded.tar.lz4
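
The sequence above is the preload fallback: list the runtime's images with crictl, and if the expected control-plane image is missing, copy the preload tarball over SSH and unpack it into /var so CRI-O's storage is populated before kubeadm runs. A rough local sketch under those assumptions (the real logic parses crictl's JSON output in crio.go/preload.go; the substring match here is a simplification):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePreloaded asks the runtime for its image list and looks for the
	// expected tag; any error is treated as "not preloaded".
	func imagePreloaded(image string) bool {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false
		}
		return strings.Contains(string(out), image)
	}

	func main() {
		if imagePreloaded("k8s.gcr.io/kube-apiserver:v1.21.3") {
			fmt.Println("all images are preloaded for cri-o runtime.")
			return
		}
		// The scp of the ~576 MB preloaded-images tarball is omitted;
		// extraction matches the logged command.
		if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
			fmt.Println(err)
		}
	}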
	I0813 21:04:01.909459   11447 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:01.921741   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:01.931836   11447 docker.go:153] disabling docker service ...
	I0813 21:04:01.931885   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:01.943769   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:01.957001   11447 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:02.141489   11447 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:02.286672   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:04:02.301487   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:02.316482   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:04:02.324481   11447 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:02.332086   11447 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:02.332135   11447 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:02.348397   11447 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
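
The three commands above form a probe-then-fix step: if the bridge-nf-call-iptables sysctl key is missing, the br_netfilter module has not been loaded yet, so it is modprobe'd, and IPv4 forwarding is enabled before CRI-O starts. A local sketch of that control flow (the function name is ours; minikube pushes the same command strings through ssh_runner on the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one probe/fix command through the shell.
	func run(cmd string) error {
		return exec.Command("sh", "-c", cmd).Run()
	}

	func ensureNetfilter() error {
		// Probe first; a failure means br_netfilter is not loaded yet.
		if run("sudo sysctl net.bridge.bridge-nf-call-iptables") != nil {
			if err := run("sudo modprobe br_netfilter"); err != nil {
				return err
			}
		}
		// Forwarding is enabled either way before the runtime starts.
		return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
	}

	func main() {
		fmt.Println(ensureNetfilter())
	}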
	I0813 21:04:02.355704   11447 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:02.519419   11447 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:02.853377   11447 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:02.853455   11447 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:02.859109   11447 start.go:413] Will wait 60s for crictl version
	I0813 21:04:02.859179   11447 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:02.895788   11447 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:02.895871   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:02.973856   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:01.014560   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:03.513509   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:01.169481   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.824663   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.802040   11447 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 21:04:04.802102   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:04:04.808733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809248   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:04:04.809286   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809574   11447 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:04.815288   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
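
The one-liner above is how minikube pins names into the guest's /etc/hosts: grep -v strips any line already ending in a tab plus the name, echo appends the fresh mapping, the result lands in a PID-suffixed temp file (/tmp/h.$$), and sudo cp swaps it into place in one step. The same pattern reappears below for control-plane.minikube.internal. A sketch of the equivalent rewrite on a local file (function name and path are illustrative):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// pinHost drops any stale "\t<name>" mapping and appends "<ip>\t<name>",
	// mirroring the grep -v / echo / cp pipeline in the logged command.
	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // grep -v $'\t<name>$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHost("/tmp/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}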
	I0813 21:04:04.828595   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:04:04.828664   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.877574   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.877604   11447 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:04:04.877660   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.914222   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.914249   11447 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:04:04.914336   11447 ssh_runner.go:149] Run: crio config
	I0813 21:04:05.157389   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:05.157412   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:05.157424   11447 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:05.157439   11447 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813210102-30853 NodeName:default-k8s-different-port-20210813210102-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.136 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:05.157622   11447 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "default-k8s-different-port-20210813210102-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:05.157727   11447 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=default-k8s-different-port-20210813210102-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.136 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0813 21:04:05.157774   11447 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:04:05.167087   11447 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:05.167155   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:05.175473   11447 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (528 bytes)
	I0813 21:04:05.188753   11447 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:04:05.201467   11447 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0813 21:04:05.215461   11447 ssh_runner.go:149] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:05.220200   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.231726   11447 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853 for IP: 192.168.50.136
	I0813 21:04:05.231797   11447 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:05.231825   11447 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:05.231898   11447 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.key
	I0813 21:04:05.231928   11447 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key.cb5546de
	I0813 21:04:05.231952   11447 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key
	I0813 21:04:05.232111   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:05.232165   11447 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:05.232188   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:05.232232   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:05.232271   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:05.232307   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:05.232379   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:05.233804   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:05.253715   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:05.273351   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:05.290830   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:04:05.308416   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:05.326529   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:05.346664   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:05.364492   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:05.381949   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:05.399680   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:05.419759   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:05.438209   11447 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:05.450680   11447 ssh_runner.go:149] Run: openssl version
	I0813 21:04:05.457245   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:05.465670   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.470976   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.471018   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.477477   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:05.486446   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:05.494612   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499391   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499438   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.505622   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:05.514421   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:05.523408   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528337   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528382   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.535765   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
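
Each certificate installed above is also exposed to OpenSSL's directory-based lookup: openssl x509 -hash -noout prints the subject-name hash (for example b5213941 for minikubeCA.pem), and a symlink named hash.0 in /etc/ssl/certs lets any TLS client using the system CApath find the CA. A sketch of that pattern (the .0 suffix assumes no other certificate shares the hash):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA computes the OpenSSL subject hash of a CA certificate, then
	// links it under the CApath directory as <hash>.0.
	func installCA(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}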
	I0813 21:04:05.544593   11447 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813210102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:05.544684   11447 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:05.544726   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:05.585256   11447 cri.go:76] found id: ""
	I0813 21:04:05.585334   11447 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:05.593681   11447 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:05.593711   11447 kubeadm.go:600] restartCluster start
	I0813 21:04:05.593760   11447 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:05.602117   11447 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:05.603061   11447 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813210102-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:05.603385   11447 kubeconfig.go:128] "default-k8s-different-port-20210813210102-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:05.604147   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:05.606733   11447 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:05.614257   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.614297   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.624492   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:02.775071   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:04:02.775420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetConfigRaw
	I0813 21:04:02.776115   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:02.782201   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.782674   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.782712   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.783141   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:04:02.783367   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783571   11600 machine.go:88] provisioning docker machine ...
	I0813 21:04:02.783598   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.783946   11600 buildroot.go:166] provisioning hostname "no-preload-20210813205915-30853"
	I0813 21:04:02.783971   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.784147   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.789849   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790287   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.790320   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.790578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790777   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790928   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.791095   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.791315   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.791336   11600 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813205915-30853 && echo "no-preload-20210813205915-30853" | sudo tee /etc/hostname
	I0813 21:04:02.946559   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813205915-30853
	
	I0813 21:04:02.946596   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.952957   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953358   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.953393   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953568   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.953745   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.953960   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.954167   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.954385   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.954624   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.954665   11600 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813205915-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813205915-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813205915-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:04:03.094292   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:04:03.094324   11600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:04:03.094356   11600 buildroot.go:174] setting up certificates
	I0813 21:04:03.094369   11600 provision.go:83] configureAuth start
	I0813 21:04:03.094384   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:03.094688   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:03.100354   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.100739   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.105867   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106237   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.106310   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106463   11600 provision.go:138] copyHostCerts
	I0813 21:04:03.106530   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:04:03.106543   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:04:03.106590   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:04:03.106682   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:04:03.106693   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:04:03.106720   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:04:03.106783   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:04:03.106793   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:04:03.106815   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:04:03.106882   11600 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813205915-30853 san=[192.168.105.107 192.168.105.107 localhost 127.0.0.1 minikube no-preload-20210813205915-30853]
	I0813 21:04:03.232637   11600 provision.go:172] copyRemoteCerts
	I0813 21:04:03.232735   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:04:03.232781   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.238750   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239227   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.239262   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.239634   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.239802   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.239979   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:03.330067   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:04:03.347432   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:04:03.580187   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:04:03.733835   11600 provision.go:86] duration metric: configureAuth took 639.447362ms
	I0813 21:04:03.733873   11600 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:04:03.734092   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:04:03.734225   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.740654   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741046   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.741091   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741217   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.741420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741586   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741748   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.741941   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:03.742078   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:03.742093   11600 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:04:04.399833   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:04:04.399867   11600 machine.go:91] provisioned docker machine in 1.616277375s
	I0813 21:04:04.399881   11600 start.go:267] post-start starting for "no-preload-20210813205915-30853" (driver="kvm2")
	I0813 21:04:04.399888   11600 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:04:04.399909   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.400282   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:04:04.400324   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.406533   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.406945   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.406987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.407240   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.407441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.407578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.407746   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.498949   11600 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:04:04.503867   11600 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:04:04.503896   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:04:04.503972   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:04:04.504097   11600 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:04:04.504223   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:04:04.511733   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:04.528408   11600 start.go:270] post-start completed in 128.513758ms
	I0813 21:04:04.528443   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.528707   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.534254   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534663   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.534695   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.534987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535140   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535279   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.535426   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:04.535597   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:04.535608   11600 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:04:04.663945   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888644.593571707
	
	I0813 21:04:04.663967   11600 fix.go:212] guest clock: 1628888644.593571707
	I0813 21:04:04.663974   11600 fix.go:225] Guest: 2021-08-13 21:04:04.593571707 +0000 UTC Remote: 2021-08-13 21:04:04.528687546 +0000 UTC m=+32.319635142 (delta=64.884161ms)
	I0813 21:04:04.663992   11600 fix.go:196] guest clock delta is within tolerance: 64.884161ms
	I0813 21:04:04.663998   11600 fix.go:57] fixHost completed within 16.76784432s
	I0813 21:04:04.664002   11600 start.go:80] releasing machines lock for "no-preload-20210813205915-30853", held for 16.76787935s
	I0813 21:04:04.664032   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.664301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:04.670385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670693   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.670728   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670905   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671084   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671497   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671741   11600 ssh_runner.go:149] Run: systemctl --version
	I0813 21:04:04.671770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.671781   11600 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:04:04.671828   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.677842   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.677920   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678239   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678271   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678303   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678537   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678601   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678680   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678746   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678866   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.678918   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.778153   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:04.778247   11600 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:04.790123   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:04.799742   11600 docker.go:153] disabling docker service ...
	I0813 21:04:04.799795   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:04.814660   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:04.826371   11600 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:04.984940   11600 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:05.134330   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:04:05.146967   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:05.162919   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:04:05.171969   11600 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:05.178773   11600 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:05.178830   11600 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:05.195828   11600 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:04:05.202754   11600 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:05.337419   11600 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:05.559682   11600 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:05.559752   11600 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:05.566062   11600 start.go:413] Will wait 60s for crictl version
	I0813 21:04:05.566138   11600 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:05.601921   11600 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:05.602001   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.842661   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.956395   11600 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:04:05.956450   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:05.962605   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.962975   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:05.962999   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.963185   11600 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:05.968381   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.979746   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:05.979790   11600 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:06.037577   11600 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:04:06.037602   11600 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 k8s.gcr.io/kube-proxy:v1.22.0-rc.0 k8s.gcr.io/pause:3.4.1 k8s.gcr.io/etcd:3.4.13-3 k8s.gcr.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.037756   11600 image.go:133] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.037772   11600 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.037785   11600 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:06.037762   11600 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:06.037738   11600 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.037735   11600 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.037741   11600 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:06.037767   11600 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:06.039362   11600 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0813 21:04:06.053753   11600 image.go:171] found k8s.gcr.io/pause:3.4.1 locally: &{Image:0xc000d620e0}
	I0813 21:04:06.053840   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.454088   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.627170   11600 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000a3e0e0}
	I0813 21:04:06.627262   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.677125   11600 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" does not exist at hash "ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c" in container runtime
	I0813 21:04:06.677177   11600 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.677243   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.772729   11600 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000a3e3e0}
	I0813 21:04:06.772826   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.829141   11600 image.go:171] found k8s.gcr.io/coredns/coredns:v1.8.0 locally: &{Image:0xc00142e1e0}
	I0813 21:04:06.829237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.902889   11600 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 21:04:06.902989   11600 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.903035   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.902933   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:07.109713   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.109813   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:07.109896   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117259   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0 (exists)
	I0813 21:04:07.117279   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117314   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.171175   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 21:04:07.171310   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:05.516944   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:08.013394   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:07.172226   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:09.188184   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:05.824992   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.825077   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.837175   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.025601   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.025691   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.036326   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.225644   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.225742   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.238574   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.425637   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.425737   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.438316   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.625622   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.625698   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.643437   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.824708   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.824784   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.840790   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.024978   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.025048   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.042237   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.225613   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.225690   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.238533   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.424924   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.425004   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.437239   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.625345   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.625418   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.643925   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.825147   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.825246   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.839517   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.024742   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.024831   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.037540   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.224652   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.224733   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.237758   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.425032   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.425121   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.438563   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.624675   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.624790   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.640197   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.640219   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.640266   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.654071   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.654097   11447 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:04:08.654106   11447 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:08.654124   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:08.654177   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:08.717698   11447 cri.go:76] found id: ""
	I0813 21:04:08.717795   11447 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:08.753323   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:08.778307   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:08.778369   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800125   11447 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800151   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:09.316586   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.438674   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.122049553s)
	I0813 21:04:10.438715   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:07.759123   11600 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000d620e0}
	I0813 21:04:07.759237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:09.111081   11600 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc00142e040}
	I0813 21:04:09.111212   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:09.462306   11600 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc00142e140}
	I0813 21:04:09.462414   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:10.255823   11600 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc0012f0120}
	I0813 21:04:10.255916   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:11.315708   11600 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000d62460}
	I0813 21:04:11.315815   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:10.514963   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:12.516333   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:11.670913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:14.171134   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:10.800884   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.992029   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:11.167449   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:11.167518   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:11.684011   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.184677   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.684502   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.184162   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.684035   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.183991   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.683969   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.184603   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.684380   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.372670   11600 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (6.201329225s)
	I0813 21:04:13.372706   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: (6.255368199s)
	I0813 21:04:13.372718   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0813 21:04:13.372732   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 from cache
	I0813 21:04:13.372728   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (5.613461548s)
	I0813 21:04:13.372758   11600 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372783   11600 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" does not exist at hash "7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75" in container runtime
	I0813 21:04:13.372830   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (3.910399102s)
	I0813 21:04:13.372858   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372868   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3: (3.116939311s)
	I0813 21:04:13.372873   11600 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" does not exist at hash "b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a" in container runtime
	I0813 21:04:13.372900   11600 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:13.372831   11600 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.372924   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (2.057095132s)
	I0813 21:04:13.372931   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372936   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372786   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0: (4.261556732s)
	I0813 21:04:13.373009   11600 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" does not exist at hash "cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c" in container runtime
	I0813 21:04:13.373032   11600 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:13.373056   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.381245   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.381490   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:15.288527   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.915644282s)
	I0813 21:04:15.288559   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 21:04:15.288601   11600 ssh_runner.go:189] Completed: which crictl: (1.91552977s)
	I0813 21:04:15.288660   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:15.288670   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (1.907403335s)
	I0813 21:04:15.288709   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288741   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (1.90722818s)
	I0813 21:04:15.288782   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.288805   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288858   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323185   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323264   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323283   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323302   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323314   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323320   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.329111   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0 (exists)
	I0813 21:04:15.011212   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:17.011691   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.670490   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:19.170343   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.184356   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:16.684936   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.184954   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.684681   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.184911   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.684242   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.184095   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.683984   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.184175   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.210489   11447 api_server.go:70] duration metric: took 9.043039811s to wait for apiserver process to appear ...
	I0813 21:04:20.210519   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:20.210533   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:20.211291   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": dial tcp 192.168.50.136:8444: connect: connection refused
	I0813 21:04:20.711989   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:21.745565   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: (6.422201905s)
	I0813 21:04:21.745599   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 from cache
	I0813 21:04:21.745635   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:21.745691   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:19.017281   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.514778   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.515219   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.171057   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.670243   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.713040   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:24.199550   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: (2.45382894s)
	I0813 21:04:24.199592   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 from cache
	I0813 21:04:24.199629   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:24.199702   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:26.212134   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:26.605510   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:04:26.605545   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:04:26.711743   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.047887   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.047925   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.212219   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.218272   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.218303   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.711515   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.725621   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.725665   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:28.212046   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:28.224546   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:04:28.234553   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:04:28.234579   11447 api_server.go:129] duration metric: took 8.024053155s to wait for apiserver health ...
	I0813 21:04:28.234595   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:28.234616   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
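
Editor's note: the block above shows minikube repeatedly probing the apiserver's /healthz endpoint and tolerating 500 responses (individual "[-]poststarthook/... failed" checks) until the endpoint returns 200. A minimal Go sketch of such a poll loop follows; it is illustrative only, and the function name waitForHealthz is hypothetical, not minikube's actual API.

// Illustrative sketch: poll an apiserver /healthz endpoint until it
// returns 200 OK or the timeout elapses. 500s are expected while
// post-start hooks (rbac/bootstrap-roles etc.) are still completing.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Bootstrap-time probes must tolerate the apiserver's self-signed
	// cert, hence InsecureSkipVerify here.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			// A 500 with "healthz check failed" is normal during
			// startup; keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.136:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
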
	I0813 21:04:26.019080   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.516769   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.670866   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:27.671923   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:30.171118   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.236904   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:04:28.236969   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:04:28.252383   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:04:28.300743   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:04:28.320179   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:04:28.320225   11447 system_pods.go:61] "coredns-558bd4d5db-v2sv5" [3b82b811-5e28-41dc-b0e1-71233efc654e] Running
	I0813 21:04:28.320234   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [89cff97c-ff5c-4920-a05f-1ec7b313043b] Running
	I0813 21:04:28.320241   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [734380ac-398d-4b51-a67f-aaac2457110c] Running
	I0813 21:04:28.320252   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [ebc5d291-624f-4c49-b9cb-436204a7665a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:04:28.320261   11447 system_pods.go:61] "kube-proxy-99cxm" [a1bfba1d-d9fb-4d24-abe9-fd0522c591f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:04:28.320271   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [b66e01ad-943e-4a2c-aabe-d18f92fd5eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 21:04:28.320290   11447 system_pods.go:61] "metrics-server-7c784ccb57-xfj59" [b522ac66-040a-4030-a817-c422c703b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:04:28.320308   11447 system_pods.go:61] "storage-provisioner" [d59ea453-ed7b-4952-bd61-7993245a1986] Running
	I0813 21:04:28.320315   11447 system_pods.go:74] duration metric: took 19.546937ms to wait for pod list to return data ...
	I0813 21:04:28.320330   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:04:28.329682   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:04:28.329749   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:04:28.329769   11447 node_conditions.go:105] duration metric: took 9.429948ms to run NodePressure ...
	I0813 21:04:28.329793   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:29.546168   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.216348804s)
	I0813 21:04:29.546210   11447 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563341   11447 kubeadm.go:746] kubelet initialised
	I0813 21:04:29.563369   11447 kubeadm.go:747] duration metric: took 17.148102ms waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563380   11447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:04:29.573196   11447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:04:29.338170   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: (5.138437758s)
	I0813 21:04:29.338201   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 from cache
	I0813 21:04:29.338230   11600 cache_images.go:113] Successfully loaded all cached images
	I0813 21:04:29.338242   11600 cache_images.go:82] LoadImages completed in 23.300623842s
	I0813 21:04:29.338374   11600 ssh_runner.go:149] Run: crio config
	I0813 21:04:29.638116   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:04:29.638137   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:29.638149   11600 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:29.638162   11600 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.107 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813205915-30853 NodeName:no-preload-20210813205915-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.107 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:29.638336   11600 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "no-preload-20210813205915-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:29.638444   11600 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=no-preload-20210813205915-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.107 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
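
Editor's note: the [Unit]/[Service] text above is the kubelet systemd drop-in minikube generates for this node, with the runtime socket, hostname override, and node IP substituted in. A hypothetical Go sketch of rendering such a drop-in from a template follows; the field names are assumptions for illustration, not minikube's actual template.

// Illustrative sketch: render a kubelet systemd drop-in like the one
// logged above from a small text/template.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Service]
ExecStart=
ExecStart={{.Bin}} --config={{.Config}} --container-runtime-endpoint={{.Socket}} --hostname-override={{.Host}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropin))
	_ = t.Execute(os.Stdout, map[string]string{
		"Bin":    "/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet",
		"Config": "/var/lib/kubelet/config.yaml",
		"Socket": "/var/run/crio/crio.sock",
		"Host":   "no-preload-20210813205915-30853",
		"IP":     "192.168.105.107",
	})
}
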
	I0813 21:04:29.638511   11600 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:04:29.651119   11600 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:29.651199   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:29.658178   11600 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (518 bytes)
	I0813 21:04:29.674188   11600 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:04:29.689809   11600 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I0813 21:04:29.704568   11600 ssh_runner.go:149] Run: grep 192.168.105.107	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:29.709516   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
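
Editor's note: the bash one-liner above makes the control-plane hosts entry idempotent: drop any existing line ending in the hostname, append the fresh IP mapping, then copy the result over /etc/hosts with sudo. A minimal Go sketch of the same upsert logic follows; upsertHostsEntry is a hypothetical helper, not part of minikube.

// Illustrative sketch: idempotently replace a tab-separated
// "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline above.
package main

import (
	"fmt"
	"strings"
)

func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		// Match the same "line ends in <tab><name>" test grep -v used.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(upsertHostsEntry("127.0.0.1\tlocalhost", "192.168.105.107", "control-plane.minikube.internal"))
}
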
	I0813 21:04:29.722084   11600 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853 for IP: 192.168.105.107
	I0813 21:04:29.722165   11600 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:29.722197   11600 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:29.722281   11600 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.key
	I0813 21:04:29.722312   11600 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key.209a1939
	I0813 21:04:29.722343   11600 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key
	I0813 21:04:29.722473   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:29.722561   11600 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:29.722580   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:29.722661   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:29.722712   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:29.722757   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:29.722866   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:29.724368   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:29.746769   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:29.768192   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:29.786871   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:04:29.806532   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:29.825599   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:29.847494   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:29.870257   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:29.892328   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:29.912923   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:29.931703   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:29.951536   11600 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:29.968398   11600 ssh_runner.go:149] Run: openssl version
	I0813 21:04:29.976170   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:29.984473   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989429   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989476   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.995576   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:30.003420   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:30.011665   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.017989   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.018036   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.025928   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:30.036305   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:30.046763   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052505   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052558   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.059983   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
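
Editor's note: the openssl runs above implement OpenSSL's hashed-directory trust layout: `openssl x509 -hash -noout` prints the certificate's subject hash, and a symlink `/etc/ssl/certs/<hash>.0` pointing at the PEM is what lets OpenSSL find the CA at verification time. A hedged Go sketch of that step follows; trustCert is a hypothetical helper shelling out to the same commands seen in the log.

// Illustrative sketch: compute the OpenSSL subject hash for a PEM and
// symlink <hash>.0 to it, as the ssh_runner commands above do.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pem, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	// -f replaces a stale link if one exists (mirrors "ln -fs" in the log).
	return exec.Command("ln", "-fs", pem, link).Run()
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
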
	I0813 21:04:30.068353   11600 kubeadm.go:390] StartCluster: {Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:30.068511   11600 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:30.068563   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:30.103079   11600 cri.go:76] found id: ""
	I0813 21:04:30.103167   11600 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:30.112165   11600 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:30.112188   11600 kubeadm.go:600] restartCluster start
	I0813 21:04:30.112242   11600 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:30.120196   11600 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.121712   11600 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813205915-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:30.122350   11600 kubeconfig.go:128] "no-preload-20210813205915-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:30.123522   11600 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:30.127714   11600 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:30.134966   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.135011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.144537   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.344893   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.345009   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.354676   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.544891   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.544966   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.554560   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.744600   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.744692   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.756935   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.945184   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.945265   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.955263   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.145650   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.145758   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.157682   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.344971   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.345039   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.354648   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.544933   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.545001   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.554862   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.745107   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.745178   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.756702   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.945036   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.945134   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.956052   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.145356   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.145486   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.154892   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.013514   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.515372   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.667378   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:34.671027   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:31.606937   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.614157   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.344907   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.344989   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.354828   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.545178   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.545268   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.554771   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.745015   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.745132   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.754451   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.945134   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.945223   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.958046   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.145379   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.145471   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.156311   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.156338   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.156387   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.166450   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.166479   11600 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
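
Editor's note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` probes above detect whether an apiserver process exists; pgrep exits with status 1 when nothing matches, and after enough failed probes minikube decides the cluster needs reconfiguring. A minimal Go sketch of that probe loop follows; apiserverRunning is a hypothetical helper, not minikube's code.

// Illustrative sketch: treat pgrep's exit status as a liveness signal
// for the apiserver process, retrying on a short interval.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// -x: match full command line exactly; -n: newest match; -f: match
	// against the full argument list, as in the log above.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // exit status 1 means no matching process
}

func main() {
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process; reconfigure needed")
}
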
	I0813 21:04:33.166489   11600 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:33.166504   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:33.166556   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:33.201224   11600 cri.go:76] found id: ""
	I0813 21:04:33.201320   11600 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:33.218274   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:33.226895   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:33.226953   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233603   11600 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233633   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:33.409004   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.227200   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.522150   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.670047   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.781290   11600 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:34.781393   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.294318   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.794319   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.294093   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.794810   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.517996   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.013307   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.169398   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:39.667640   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:36.109861   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.110944   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:40.608444   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.294229   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:37.794174   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.294380   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.795081   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.295011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.794912   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.294691   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.794676   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.294339   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.794517   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.514739   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.515815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:41.674615   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:44.171008   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:43.111611   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:45.608557   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.294762   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:42.794735   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.294817   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.794556   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.818714   11600 api_server.go:70] duration metric: took 9.037423183s to wait for apiserver process to appear ...
	I0813 21:04:43.818749   11600 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:43.818763   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:43.819314   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": dial tcp 192.168.105.107:8443: connect: connection refused
	I0813 21:04:44.319959   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:45.012244   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.016481   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:46.672075   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.172907   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.615450   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:50.112038   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.320842   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:49.820028   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:49.514363   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.012464   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:51.669686   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:53.793699   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.607875   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.608704   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.821107   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:55.319665   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:54.013451   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.512870   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.517483   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.168752   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.169636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:57.108818   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:59.110668   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.319940   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:05:00.819508   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:01.018546   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:03.515645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.668977   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:02.670402   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.170956   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:01.618304   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:04.109034   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.157882   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:05:05.158001   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:05:05.320212   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.504416   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.504471   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:05.819967   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.864291   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.864338   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.319440   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.332338   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:06.332364   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.820046   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.827164   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 200:
	ok
	I0813 21:05:06.836155   11600 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:05:06.836176   11600 api_server.go:129] duration metric: took 23.017420085s to wait for apiserver health ...
	I0813 21:05:06.836188   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:05:06.836198   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:05:06.838586   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:05:06.838684   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:05:06.847037   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:05:06.865264   11600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:05:06.893537   11600 system_pods.go:59] 8 kube-system pods found
	I0813 21:05:06.893572   11600 system_pods.go:61] "coredns-78fcd69978-wqktx" [84e2ed0e-2c5a-4dcc-a8de-2cee9f92d267] Running
	I0813 21:05:06.893578   11600 system_pods.go:61] "etcd-no-preload-20210813205915-30853" [de55bcf6-20c8-4b4a-81e0-b181cca0e618] Running
	I0813 21:05:06.893582   11600 system_pods.go:61] "kube-apiserver-no-preload-20210813205915-30853" [53002765-155d-4f17-b484-2fe4e088255d] Running
	I0813 21:05:06.893587   11600 system_pods.go:61] "kube-controller-manager-no-preload-20210813205915-30853" [6052be3c-51df-4a5c-b8a1-6a5a64b4d241] Running
	I0813 21:05:06.893594   11600 system_pods.go:61] "kube-proxy-vvkkd" [c6eef664-f71d-4d0f-aec7-8942b5977520] Running
	I0813 21:05:06.893599   11600 system_pods.go:61] "kube-scheduler-no-preload-20210813205915-30853" [24d521ca-7b13-4b06-805d-7b568471cffb] Running
	I0813 21:05:06.893615   11600 system_pods.go:61] "metrics-server-7c784ccb57-rfp5v" [8c3b111e-0b1d-4a36-85ab-49fe495a538e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:05:06.893629   11600 system_pods.go:61] "storage-provisioner" [dfb23af4-15d2-420e-8720-c4fee1cf94f8] Running
	I0813 21:05:06.893637   11600 system_pods.go:74] duration metric: took 28.354614ms to wait for pod list to return data ...
	I0813 21:05:06.893648   11600 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:05:06.916270   11600 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:05:06.916300   11600 node_conditions.go:123] node cpu capacity is 2
	I0813 21:05:06.916316   11600 node_conditions.go:105] duration metric: took 22.662818ms to run NodePressure ...
	I0813 21:05:06.916337   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:05.516343   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.517331   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.670058   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:09.675888   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:06.111044   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.608567   11447 pod_ready.go:92] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.608606   11447 pod_ready.go:81] duration metric: took 38.035378096s waiting for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.608620   11447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615404   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.615428   11447 pod_ready.go:81] duration metric: took 6.797829ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615442   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630269   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.630291   11447 pod_ready.go:81] duration metric: took 14.84004ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630301   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637173   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.637191   11447 pod_ready.go:81] duration metric: took 6.881994ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637205   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641787   11447 pod_ready.go:92] pod "kube-proxy-99cxm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.641806   11447 pod_ready.go:81] duration metric: took 4.592412ms waiting for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641816   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006732   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:08.006761   11447 pod_ready.go:81] duration metric: took 364.934714ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006777   11447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.416206   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.404648   11600 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:05:07.414912   11600 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0813 21:05:07.708787   11600 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0813 21:05:08.256390   11600 kubeadm.go:746] kubelet initialised
	I0813 21:05:08.256419   11600 kubeadm.go:747] duration metric: took 851.738381ms waiting for restarted kubelet to initialise ...
	I0813 21:05:08.256432   11600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:05:08.265413   11600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.372610   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:10.016406   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.513411   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.171097   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.667560   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.416520   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.917152   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.791126   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:15.296951   11600 pod_ready.go:92] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:15.296981   11600 pod_ready.go:81] duration metric: took 7.031537534s waiting for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:15.296992   11600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:14.513966   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.518250   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.669467   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:18.670323   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.956540   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.413311   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.316436   11600 pod_ready.go:102] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.817195   11600 pod_ready.go:92] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.817242   11600 pod_ready.go:81] duration metric: took 2.520242337s waiting for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.817255   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.825965   11600 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.825988   11600 pod_ready.go:81] duration metric: took 8.722511ms waiting for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.826001   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:19.873713   11600 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.011904   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.016678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.516661   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.171346   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.667746   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.422135   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.915750   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:22.369972   11600 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.370008   11600 pod_ready.go:81] duration metric: took 4.543995238s waiting for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.370023   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377665   11600 pod_ready.go:92] pod "kube-proxy-vvkkd" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.377685   11600 pod_ready.go:81] duration metric: took 7.65301ms waiting for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377696   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385096   11600 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.385113   11600 pod_ready.go:81] duration metric: took 7.408599ms waiting for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385121   11600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:24.402382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.901061   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.018949   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.513145   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:25.668326   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.186367   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.415525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.913863   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.902947   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.903048   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.516874   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.011959   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.666530   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:32.666799   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:34.668707   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.915376   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.415440   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.415962   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.403872   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.902644   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.014820   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.015893   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.169496   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.170551   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.918334   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.414297   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:38.408969   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.903397   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.017723   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.512620   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.513209   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.171007   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.668192   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:42.915720   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.423660   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.403450   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.445034   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.515122   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.013001   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.669651   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.167953   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.171552   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.916795   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:49.916975   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.904497   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.399990   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.512153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.512918   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.174821   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.670257   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.414652   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.415677   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.404181   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.904430   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.515153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.013806   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.168792   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.666912   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:56.416201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:58.917986   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.401016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.404016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.906289   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.512815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.514140   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.668491   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.668678   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.413828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.414479   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.403957   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.901856   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.012166   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.013309   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.512931   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.168995   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.667450   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:05.918408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.416404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.416808   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.903609   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.405857   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.014642   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.512706   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.669910   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.170072   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:12.919893   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.417469   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.901800   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:16.402802   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.514827   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.012928   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.668033   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.668913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.167322   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.914829   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.413984   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.405532   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.902412   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.017907   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.514292   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.170177   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.668943   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.416213   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.922905   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.902968   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.401882   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.067645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.519637   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.167658   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.168133   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.413791   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.414145   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.402765   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.403392   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.900702   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:30.012069   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:32.014177   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.169296   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.160326   10272 pod_ready.go:81] duration metric: took 4m0.399801158s waiting for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" ...
	E0813 21:06:33.160356   10272 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:06:33.160383   10272 pod_ready.go:38] duration metric: took 4m1.6003819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:33.160416   10272 kubeadm.go:604] restartCluster took 4m59.137608004s
	W0813 21:06:33.160600   10272 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:06:33.160640   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:06:31.419127   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.918800   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.903797   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.401884   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:34.015031   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.513631   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.414485   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.415451   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.416420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.900640   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.901483   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:39.011809   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:41.013908   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:43.513605   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.920201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.415258   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.905257   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:44.905610   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.514466   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.515852   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.415484   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.415708   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.414520   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.903972   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.517251   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.012858   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:51.918221   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.918831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.402393   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.902136   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.513409   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.012531   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.392100   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.231434099s)
	I0813 21:07:00.392193   10272 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:00.406886   10272 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:00.406959   10272 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:00.442137   10272 cri.go:76] found id: ""
	I0813 21:07:00.442208   10272 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:00.449499   10272 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:00.458330   10272 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:00.458372   10272 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0813 21:06:55.923186   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:58.413947   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.414960   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.401732   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:59.404622   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.901431   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.146030   10272 out.go:204]   - Generating certificates and keys ...
	I0813 21:06:59.013910   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.514845   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:02.514874   10272 out.go:204]   - Booting up control plane ...
	I0813 21:07:02.420421   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.921161   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:03.901922   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.400821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.017697   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.512767   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:07.415160   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.916408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:08.402752   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:10.903350   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.011421   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:11.015678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.515855   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.594414   10272 out.go:204]   - Configuring RBAC rules ...
	I0813 21:07:15.029321   10272 cni.go:93] Creating CNI manager for ""
	I0813 21:07:15.029346   10272 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:07:15.031000   10272 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:07:15.031061   10272 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:07:15.039108   10272 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:07:15.058649   10272 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:07:15.058707   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.058717   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205823-30853 minikube.k8s.io/updated_at=2021_08_13T21_07_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.095343   10272 ops.go:34] apiserver oom_adj: 16
	I0813 21:07:15.095372   10272 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 21:07:15.095386   10272 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:07:15.330590   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:12.413115   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.414512   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.400030   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.403757   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.505147   10867 pod_ready.go:81] duration metric: took 4m0.402080118s waiting for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" ...
	E0813 21:07:15.505169   10867 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:07:15.505190   10867 pod_ready.go:38] duration metric: took 4m39.330917946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:15.505243   10867 kubeadm.go:604] restartCluster took 5m2.104930788s
	W0813 21:07:15.505419   10867 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:07:15.505453   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:07:15.931748   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.430811   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.930834   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.430845   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.930776   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.431732   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.930812   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.431647   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.931099   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.431444   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.414885   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.422404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:17.901988   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.403379   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.930893   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.430961   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.931774   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.431310   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.931068   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.431314   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.931570   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.431290   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.931320   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:25.431531   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.914560   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.914642   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.916586   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.902451   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.903333   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.931646   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.431685   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.931719   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.431409   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.930888   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.431524   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.931535   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.431073   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.931502   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:30.430962   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.919653   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.418420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:30.543916   10272 kubeadm.go:985] duration metric: took 15.48526077s to wait for elevateKubeSystemPrivileges.
	I0813 21:07:30.543949   10272 kubeadm.go:392] StartCluster complete in 5m56.564780701s
	I0813 21:07:30.543981   10272 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:30.544141   10272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:07:30.545813   10272 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:31.081760   10272 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205823-30853" rescaled to 1
	I0813 21:07:31.081820   10272 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.49 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 21:07:31.083916   10272 out.go:177] * Verifying Kubernetes components...
	I0813 21:07:31.083983   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:07:31.081886   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:07:31.081888   10272 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:07:31.084080   10272 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084084   10272 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084099   10272 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.082132   10272 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0813 21:07:31.084108   10272 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:07:31.084120   10272 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084134   10272 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084143   10272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084151   10272 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084158   10272 addons.go:147] addon metrics-server should already be in state true
	I0813 21:07:31.084100   10272 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084168   10272 addons.go:147] addon dashboard should already be in state true
	I0813 21:07:31.084183   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084189   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084158   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084632   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084685   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084687   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084751   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084792   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084865   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.105064   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0813 21:07:31.105078   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0813 21:07:31.105589   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105724   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105733   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0813 21:07:31.105826   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0813 21:07:31.106201   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106225   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106288   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.106388   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106410   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106656   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106795   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106823   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106845   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106940   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.107274   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.107310   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.107372   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.107393   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.107505   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.107679   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.107914   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.108023   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108066   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.108456   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108502   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.121147   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0813 21:07:31.120919   10272 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.121411   10272 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:07:31.121457   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.121491   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0813 21:07:31.121993   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.122297   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.122764   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123195   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123739   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123763   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.123790   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123822   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.124154   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124287   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124315   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.124496   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.128429   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130930   10272 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:07:31.129602   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130875   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0813 21:07:31.132382   10272 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.132436   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:07:31.132451   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:07:31.132474   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.134119   10272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:07:31.134224   10272 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.134241   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:07:31.134259   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.132855   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.135094   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.135114   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.135252   10272 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.135886   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.136518   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.140126   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.140398   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:27.404366   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.901079   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.902091   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.142209   10272 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.142270   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:07:31.140792   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142282   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:07:31.140956   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.142313   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142015   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.142480   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.142494   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142517   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142738   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.142977   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.143006   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143155   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.143333   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.143530   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143544   10272 node_ready.go:49] node "old-k8s-version-20210813205823-30853" has status "Ready":"True"
	I0813 21:07:31.143557   10272 node_ready.go:38] duration metric: took 8.284522ms waiting for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.143568   10272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
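(The node_ready check above resolves in 8.3ms because the node is already Ready; the check itself is a poll of the node's Ready condition against the apiserver. A minimal client-go sketch of such a check follows; it assumes nothing about minikube's actual implementation, and the helper name waitNodeReady is made up.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True or the
// timeout elapses, mirroring the "waiting up to 6m0s for node ... to be
// Ready" log line above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "old-k8s-version-20210813205823-30853", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}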
	I0813 21:07:31.145891   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0813 21:07:31.146234   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.146769   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.146792   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.147190   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.147843   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.147892   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.148364   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148819   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.148848   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148994   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.149157   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.149288   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.149464   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
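(Each "sshutil.go:53] new ssh client" line above carries the tuple {IP, Port, SSHKeyPath, Username} used to reach the guest VM. A sketch of constructing such a client with golang.org/x/crypto/ssh, under the assumption of plain public-key auth; skipping host-key verification, as below, is only tolerable for a throwaway test VM, and the key path in main is a placeholder.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The guest is created fresh for the test, so its host key is
		// not pinned anywhere; a production client must verify it.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	client, err := newSSHClient("192.168.83.49", 22, os.Getenv("HOME")+"/.ssh/id_rsa", "docker")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}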
	I0813 21:07:31.154492   10272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:31.159199   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0813 21:07:31.159608   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.160083   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.160107   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.160442   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.160628   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.163581   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.163764   10272 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.163780   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:07:31.163796   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.169112   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169507   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.169535   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169656   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.169820   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.170004   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.170153   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
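(The "scp memory --> <path> (N bytes)" lines mean each addon manifest is rendered in memory and streamed straight into the guest over the SSH connection; no local temp file is involved. A hypothetical sketch of that pattern; pushBytes and the use of sudo tee are assumptions, not minikube's actual code.)

package addons

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// pushBytes streams data over an existing SSH connection into dst on the
// guest, the way the "scp memory --> ..." log lines describe.
func pushBytes(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee needs root because /etc/kubernetes/addons is root-owned.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}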
	I0813 21:07:31.334616   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
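(The pipeline just launched reads the coredns ConfigMap, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" line, and feeds the result back through kubectl replace. The inserted fragment, reconstructed verbatim from the sed expression, is shown below; fallthrough sends names that do not match the hosts entries on to the next plugin, so only host.minikube.internal is answered locally.)

package corednspatch

// hostsBlock is the Corefile fragment the sed expression above splices in
// immediately before the "forward . /etc/resolv.conf" line.
const hostsBlock = `        hosts {
           192.168.83.1 host.minikube.internal
           fallthrough
        }`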
	I0813 21:07:31.339091   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.350144   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:07:31.350160   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:07:31.366866   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:07:31.366889   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:07:31.415434   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:07:31.415460   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:07:31.415813   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.439763   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:07:31.439787   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:07:31.551531   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.551559   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:07:31.614721   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:07:31.614757   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:07:31.648730   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.686266   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:07:31.686288   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:07:31.766323   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:07:31.766354   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:07:32.021208   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:07:32.021232   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:07:32.128868   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:07:32.128914   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:07:32.396755   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:07:32.396784   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:07:32.629623   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:32.629647   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:07:32.876963   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
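(Note that each addon's manifests are applied in a single kubectl invocation: ten -f flags for the dashboard above, four for metrics-server earlier, so an addon either applies as one command or fails as one command. A sketch of that invocation shape; applyAddon is a made-up helper, and the sudo KUBECONFIG=... prefix follows the log lines.)

package addons

import (
	"fmt"
	"os/exec"
)

// applyAddon applies all of one addon's manifests in a single kubectl
// call, as the dashboard and metrics-server commands above do.
func applyAddon(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}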
	I0813 21:07:33.170819   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.554610   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.219955078s)
	I0813 21:07:33.554661   10272 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
	I0813 21:07:33.554710   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.215586915s)
	I0813 21:07:33.554766   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.138920482s)
	I0813 21:07:33.554845   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554810   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554909   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.554882   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555205   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555224   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555237   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555251   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555322   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555339   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.555352   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555362   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.557880   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557881   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557894   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557900   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557931   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557951   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557969   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.558002   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.558255   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.558287   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.558297   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.417993   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.769219397s)
	I0813 21:07:34.418041   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.418055   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.419702   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.419703   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.419721   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.419735   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.419744   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.420013   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.420030   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.420042   10272 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:34.719323   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.842300346s)
	I0813 21:07:34.719378   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719393   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.719692   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.719710   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.719720   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719731   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.721171   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.721190   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.721177   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.723692   10272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:07:34.723719   10272 addons.go:344] enableAddons completed in 3.64184317s
	I0813 21:07:31.421963   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.916790   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.903029   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.402184   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:35.688121   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.171925   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.422423   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.916463   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.403153   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.903100   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.668346   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.668696   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:44.669555   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.922382   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.982831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.413525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:43.402566   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.905536   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:46.733235   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.227754709s)
	I0813 21:07:46.733320   10867 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:46.749380   10867 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:46.749451   10867 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:46.789090   10867 cri.go:76] found id: ""
	I0813 21:07:46.789192   10867 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:46.797753   10867 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:46.805773   10867 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:46.805816   10867 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:07:47.366092   10867 out.go:204]   - Generating certificates and keys ...
	I0813 21:07:48.287070   10867 out.go:204]   - Booting up control plane ...
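(Before the kubeadm init above, the stale-config check ran ls -la on the four kubeadm-generated kubeconfigs and got exit status 2: none exist after the preceding kubeadm reset, so stale-config cleanup is skipped and init proceeds with the long --ignore-preflight-errors list. A local equivalent of that existence check; the helper is hypothetical, not minikube's code.)

package main

import (
	"fmt"
	"os"
)

// staleConfigPresent reports whether any of the kubeadm-generated
// kubeconfigs the log checks for still exist. In the run above, all four
// were missing, so cleanup was skipped.
func staleConfigPresent() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err == nil {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("stale config present:", staleConfigPresent())
}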
	I0813 21:07:46.669635   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.169303   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.414190   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.914581   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:48.403863   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:50.902452   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.170024   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.672034   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:52.419570   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:54.922828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.400843   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:55.401813   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.169442   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.173990   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:00.180299   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.414460   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.414953   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.402188   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.407382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:01.902586   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.672361   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.168918   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.917732   10867 out.go:204]   - Configuring RBAC rules ...
	I0813 21:08:05.478215   10867 cni.go:93] Creating CNI manager for ""
	I0813 21:08:05.478240   10867 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:08:01.415978   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.916377   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.903277   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.908821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.480079   10867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:08:05.480166   10867 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:08:05.490836   10867 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
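(The 457-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist is not shown in the log; a bridge-plugin config of the general shape minikube generates for "kvm2 driver + crio runtime" looks like the following. The subnet and exact plugin list are illustrative, not the actual payload.)

package cni

// bridgeConflist is an illustrative bridge CNI config; field values are
// assumptions, since the log only reports the file's size.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`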
	I0813 21:08:05.516775   10867 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813205917-30853 minikube.k8s.io/updated_at=2021_08_13T21_08_05_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.571274   10867 ops.go:34] apiserver oom_adj: -16
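(The probe above shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj; the reported -16 biases the kernel's OOM killer away from the apiserver. The same probe written directly in Go; the helper name is hypothetical.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// oomAdj reproduces the shell pipeline in the log: find the process by
// name, then read its (legacy) /proc/<pid>/oom_adj value.
func oomAdj(process string) (string, error) {
	out, err := exec.Command("pgrep", "-xn", process).Output()
	if err != nil {
		return "", err
	}
	pid := strings.TrimSpace(string(out))
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(raw)), nil
}

func main() {
	v, err := oomAdj("kube-apiserver")
	if err != nil {
		panic(err)
	}
	fmt.Println("oom_adj:", v)
}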
	I0813 21:08:05.877007   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.498456   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.997686   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.498266   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.998377   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:08.498124   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.171495   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.171976   10272 pod_ready.go:92] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.172005   10272 pod_ready.go:81] duration metric: took 37.017483324s waiting for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.172023   10272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178546   10272 pod_ready.go:92] pod "kube-proxy-xnqfc" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.178572   10272 pod_ready.go:81] duration metric: took 6.540181ms waiting for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178582   10272 pod_ready.go:38] duration metric: took 37.035002251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:08.178607   10272 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:08.178659   10272 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:08.193211   10272 api_server.go:70] duration metric: took 37.111356956s to wait for apiserver process to appear ...
	I0813 21:08:08.193234   10272 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:08.193245   10272 api_server.go:239] Checking apiserver healthz at https://192.168.83.49:8443/healthz ...
	I0813 21:08:08.200770   10272 api_server.go:265] https://192.168.83.49:8443/healthz returned 200:
	ok
	I0813 21:08:08.201945   10272 api_server.go:139] control plane version: v1.14.0
	I0813 21:08:08.201960   10272 api_server.go:129] duration metric: took 8.721341ms to wait for apiserver health ...
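(The healthz check above is a plain HTTPS GET against https://192.168.83.49:8443/healthz expecting a 200 with body "ok". A sketch of that probe; certificate verification is skipped here the way a bootstrap-time probe can get away with, whereas a real client should trust the cluster CA instead.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same probe as api_server.go above: GET the
// healthz endpoint and require a 200 response.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.83.49:8443/healthz"); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}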
	I0813 21:08:08.201968   10272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:08.206023   10272 system_pods.go:59] 4 kube-system pods found
	I0813 21:08:08.206043   10272 system_pods.go:61] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206047   10272 system_pods.go:61] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206054   10272 system_pods.go:61] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.206058   10272 system_pods.go:61] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206065   10272 system_pods.go:74] duration metric: took 4.091873ms to wait for pod list to return data ...
	I0813 21:08:08.206072   10272 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:08.209997   10272 default_sa.go:45] found service account: "default"
	I0813 21:08:08.210015   10272 default_sa.go:55] duration metric: took 3.938001ms for default service account to be created ...
	I0813 21:08:08.210022   10272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:08.214317   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.214336   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214348   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.214354   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214373   10272 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.433733   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.433762   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433770   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433781   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.433788   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433807   10272 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.735301   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:08.735333   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735350   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:08.735360   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.735366   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735412   10272 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.097711   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.097745   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097753   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097758   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.097765   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.097770   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097788   10272 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.584281   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.584311   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584317   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584321   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.584329   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.584333   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584352   10272 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:10.134667   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:10.134694   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134701   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134706   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.134712   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.134716   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134738   10272 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
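(Each "retry.go:31] will retry after ..." line reflects a wait that grows between attempts, from 214ms up past 1.7s in this run, while control-plane pods are still missing. A sketch of that pattern with a simple multiplicative backoff; the real delays above look jittered, and retryUntil is a made-up name, not minikube's retry helper.)

package retry

import (
	"fmt"
	"time"
)

// retryUntil calls fn repeatedly, sleeping a growing delay between
// attempts, until fn succeeds or the overall timeout elapses.
func retryUntil(timeout time.Duration, fn func() error) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, as the increasing delays above suggest
	}
}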
	I0813 21:08:05.922361   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.419726   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.401315   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.909126   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.998041   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.498515   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.998297   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.498018   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.997716   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.497679   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.998238   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.498701   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.997887   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:13.498358   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.825951   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:10.825981   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825987   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.825991   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825995   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.826001   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.826006   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.826027   10272 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:11.871229   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:11.871263   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871270   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871274   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871279   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871292   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:11.871300   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871321   10272 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:12.905014   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:12.905044   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905052   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905075   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:12.905081   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905105   10272 retry.go:31] will retry after 1.268973106s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:14.179146   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:14.179173   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179179   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179183   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179188   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179195   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:14.179202   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179223   10272 retry.go:31] will retry after 1.733071555s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:10.914496   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.924919   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.415784   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.401246   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.408120   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.997632   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.497943   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.998249   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.498543   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.998283   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.497729   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.997873   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.497972   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.997958   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.497761   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.997883   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:19.220539   10867 kubeadm.go:985] duration metric: took 13.703767036s to wait for elevateKubeSystemPrivileges.
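(The burst of half-second "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: the "default" ServiceAccount only appears once the controller-manager's ServiceAccount controller has run, and here that took 13.7s after kubeadm init returned. A sketch of the polling shape; waitDefaultSA is hypothetical.)

package bootstrap

import (
	"os/exec"
	"time"
)

// waitDefaultSA polls "kubectl get sa default" at the half-second cadence
// visible in the log until the ServiceAccount exists.
func waitDefaultSA(kubectl, kubeconfig string) {
	for {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}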
	I0813 21:08:19.220607   10867 kubeadm.go:392] StartCluster complete in 6m5.865041156s
	I0813 21:08:19.220635   10867 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.220787   10867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:08:19.223909   10867 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.752954   10867 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813205917-30853" rescaled to 1
	I0813 21:08:19.753018   10867 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:08:19.754708   10867 out.go:177] * Verifying Kubernetes components...
	I0813 21:08:19.754778   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:19.753082   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:08:19.753107   10867 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:08:19.753299   10867 config.go:177] Loaded profile config "embed-certs-20210813205917-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:08:19.754891   10867 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754904   10867 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754933   10867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813205917-30853"
	I0813 21:08:19.754932   10867 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754940   10867 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754970   10867 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:19.754974   10867 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.754988   10867 addons.go:147] addon dashboard should already be in state true
	W0813 21:08:19.754987   10867 addons.go:147] addon metrics-server should already be in state true
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.754914   10867 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.755116   10867 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:08:19.755134   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755511   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755539   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755571   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755606   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755637   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755686   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.770580   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0813 21:08:19.771121   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.771377   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0813 21:08:19.771830   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.771853   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.771954   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.772247   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.772723   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.772739   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.772901   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0813 21:08:19.773026   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.773068   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.773413   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.773902   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.773924   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.774397   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774463   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774563   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.775023   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.775063   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.784550   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0813 21:08:19.784959   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.785506   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.785522   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.785894   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.786493   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.786525   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787205   10867 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.787228   10867 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:08:19.787259   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.787583   10867 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.787674   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.787718   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787787   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0813 21:08:19.787910   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0813 21:08:19.788204   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789084   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789106   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.789211   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789825   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.789931   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789953   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.790005   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.790276   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.790437   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.794978   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.794986   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.797284   10867 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.798757   10867 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:08:19.797345   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:08:19.798798   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:08:19.798822   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800334   10867 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.800389   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:08:19.800399   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:08:19.799838   10867 node_ready.go:49] node "embed-certs-20210813205917-30853" has status "Ready":"True"
	I0813 21:08:19.800420   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800422   10867 node_ready.go:38] duration metric: took 12.815275ms waiting for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.800442   10867 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:19.802028   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0813 21:08:19.802460   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.802983   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.803025   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.803483   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.803731   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.809104   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.809531   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.809654   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
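The pod_ready.go lines above (and the recurring `has status "Ready":"False"` lines from the other test processes) are a poll-until-Ready loop against the API server. Below is a minimal sketch of that pattern, assuming client-go; the package and function names are illustrative, not minikube's actual helpers.

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the named pod is Ready or the timeout expires,
// mirroring the "waiting up to 6m0s for pod ..." lines in the log.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet (or transient error): keep polling
		}
		return isPodReady(pod), nil
	})
}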
	I0813 21:08:15.917751   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:15.917783   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917792   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917799   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917805   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917816   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:15.917823   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917844   10272 retry.go:31] will retry after 2.410580953s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:18.337846   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:18.337883   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337892   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337898   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337905   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337916   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:18.337923   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337944   10272 retry.go:31] will retry after 3.437877504s: missing components: kube-apiserver, kube-controller-manager
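The system_pods.go lines from process 10272 enumerate kube-system pods and retry while expected control-plane components are absent. A rough sketch of such a check, assuming client-go and matching components by pod-name prefix; minikube's real matching differs in detail.

package syspods

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// missingComponents lists kube-system pods and returns the expected
// components with no Running pod, as in "missing components:
// kube-apiserver, kube-controller-manager" above.
func missingComponents(cs kubernetes.Interface) ([]string, error) {
	expected := []string{
		"coredns", "etcd", "kube-apiserver",
		"kube-controller-manager", "kube-proxy", "kube-scheduler",
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var missing []string
	for _, want := range expected {
		found := false
		for _, p := range pods.Items {
			if strings.HasPrefix(p.Name, want) && p.Status.Phase == corev1.PodRunning {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, want)
		}
	}
	return missing, nil
}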
	I0813 21:08:17.916739   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:20.415225   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.901469   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.902763   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.903648   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.811430   10867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:08:19.810007   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811541   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.811578   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811581   10867 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:19.810168   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.810293   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.810559   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.811047   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0813 21:08:19.811649   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811674   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:08:19.811689   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.811908   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.811910   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812443   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.812464   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.812475   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.813065   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.813083   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.813470   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.814035   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.814070   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.818289   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818751   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.818811   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.818838   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818903   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.819054   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.819209   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.825837   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0813 21:08:19.826199   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.826605   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.826624   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.826952   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.827127   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.830318   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.830538   10867 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:19.830553   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:08:19.830570   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.835761   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836143   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.836172   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836286   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.836451   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.836602   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.836724   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
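The sshutil.go and ssh_runner.go lines show the harness dialing each VM with a per-machine key as user "docker", then streaming manifest bytes into the guest ("scp memory --> ..."). A hedged sketch using golang.org/x/crypto/ssh; piping to sudo tee is one plausible stand-in for the in-memory copy, not minikube's actual transfer mechanism.

package sshcopy

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"golang.org/x/crypto/ssh"
)

// dial opens an SSH client the way the sshutil.go lines describe: key-based
// auth against the VM's IP on port 22 as user "docker".
func dial(ip, keyPath, user string) (*ssh.Client, error) {
	key, err := ioutil.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
}

// copyFromMemory streams in-memory bytes to a remote path by piping them
// through sudo tee, a stand-in for the "scp memory --> ..." transfer.
func copyFromMemory(c *ssh.Client, data []byte, dst string) error {
	sess, err := c.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}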
	I0813 21:08:20.037292   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:08:20.037321   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:08:20.099263   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:08:20.099292   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:08:20.117736   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:20.146467   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:08:20.146494   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:08:20.148636   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:20.180430   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:08:20.180464   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:08:20.300161   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:08:20.301107   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:08:20.301131   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:08:20.311540   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.311565   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:08:20.390587   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:08:20.390623   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:08:20.411556   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.513347   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:08:20.513381   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:08:20.562665   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:08:20.562692   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:08:20.637151   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:08:20.637186   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:08:20.697238   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:08:20.697266   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:08:20.722593   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:20.722622   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:08:20.888939   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
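Each addon is applied with a single kubectl invocation covering all of its manifests, as in the Run: lines above. A sketch of how such a command line can be composed; exec.Command here is a local stand-in for the SSH-based ssh_runner the harness actually uses.

package addons

import "os/exec"

// applyCmd composes the invocation seen in the Run: lines above:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f a.yaml -f b.yaml ...
// (sudo passes leading VAR=value arguments into the command's environment.)
func applyCmd(kubectl string, manifests []string) *exec.Cmd {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return exec.Command("sudo", args...)
}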
	I0813 21:08:21.832691   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:22.499631   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.381850453s)
	I0813 21:08:22.499694   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.499708   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.499992   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500011   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500021   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500031   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500251   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500299   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500317   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500327   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500578   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500587   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:22.500601   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607350   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458674806s)
	I0813 21:08:22.607409   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607423   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607684   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607702   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607713   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607728   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607970   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607987   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.671948   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.371722218s)
	I0813 21:08:22.671991   10867 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
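The bash pipeline completed above edits the coredns ConfigMap with sed, inserting a hosts{} stanza ahead of the forward plugin so host.minikube.internal resolves to the host IP (192.168.39.1 here). The same edit expressed with client-go, as a sketch; minikube itself shells out to kubectl, as logged.

package dnsinject

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord inserts a hosts{} block ahead of the forward plugin in
// the coredns Corefile, so host.minikube.internal resolves to hostIP.
func injectHostRecord(cs kubernetes.Interface, hostIP string) error {
	ctx := context.TODO()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	block := "        hosts {\n" +
		"           " + hostIP + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	cm.Data["Corefile"] = strings.Replace(
		cm.Data["Corefile"], "        forward .", block+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}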
	I0813 21:08:23.212733   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.801121223s)
	I0813 21:08:23.212785   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.212801   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213078   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213122   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213131   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213147   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.213162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213417   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213454   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213463   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213476   10867 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:23.973313   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.127694   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.238655669s)
	I0813 21:08:24.127768   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.127783   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128088   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128134   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:24.128152   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.128162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128402   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128416   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:21.783186   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:21.783216   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783222   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783226   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783231   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783238   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:21.783242   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783260   10272 retry.go:31] will retry after 3.261655801s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:25.051995   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:25.052028   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052037   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052051   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:25.052058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052076   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:25.052086   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052104   10272 retry.go:31] will retry after 4.086092664s: missing components: kube-apiserver, kube-controller-manager
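The retry.go:31 lines sleep a growing, jittered interval between attempts (2.4s, 3.4s, 3.3s, 4.1s, ...) while components are still missing. A stdlib-only sketch of that shape; the helper name and the 1.5x growth factor are illustrative.

package retry

import (
	"log"
	"time"
)

// until calls fn until it succeeds or the deadline passes, sleeping a
// growing interval between attempts, in the spirit of retry.go above.
func until(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	backoff := 2 * time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		log.Printf("will retry after %v: %v", backoff, err)
		time.Sleep(backoff)
		backoff += backoff / 2 // ~1.5x growth; the logged intervals are jittered
	}
}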
	I0813 21:08:22.421981   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.915565   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.903699   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:25.903987   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.130282   10867 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:08:24.130308   10867 addons.go:344] enableAddons completed in 4.377209962s
	I0813 21:08:26.342246   10867 pod_ready.go:92] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:26.342272   10867 pod_ready.go:81] duration metric: took 6.532595189s waiting for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:26.342282   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:28.367486   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.149965   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:29.149997   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150006   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150013   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:29.150019   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150025   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150035   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:29.150043   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150063   10272 retry.go:31] will retry after 6.402197611s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:26.928284   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.416662   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.403505   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.906239   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.367630   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.386002   10867 pod_ready.go:97] error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386040   10867 pod_ready.go:81] duration metric: took 5.043748322s waiting for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	E0813 21:08:31.386053   10867 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386063   10867 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395413   10867 pod_ready.go:92] pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.395442   10867 pod_ready.go:81] duration metric: took 9.37037ms waiting for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395456   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407839   10867 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.407860   10867 pod_ready.go:81] duration metric: took 12.39509ms waiting for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407872   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413811   10867 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.413832   10867 pod_ready.go:81] duration metric: took 5.950273ms waiting for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413845   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422794   10867 pod_ready.go:92] pod "kube-proxy-szvqm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.422819   10867 pod_ready.go:81] duration metric: took 8.966458ms waiting for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422831   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564060   10867 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.564136   10867 pod_ready.go:81] duration metric: took 141.29321ms waiting for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564168   10867 pod_ready.go:38] duration metric: took 11.763707327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:31.564208   10867 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:31.564290   10867 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:31.578890   10867 api_server.go:70] duration metric: took 11.8258395s to wait for apiserver process to appear ...
	I0813 21:08:31.578919   10867 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:31.578932   10867 api_server.go:239] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0813 21:08:31.585647   10867 api_server.go:265] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0813 21:08:31.586833   10867 api_server.go:139] control plane version: v1.21.3
	I0813 21:08:31.586868   10867 api_server.go:129] duration metric: took 7.925906ms to wait for apiserver health ...
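The api_server.go lines probe https://192.168.39.156:8443/healthz until it returns 200 with body "ok". A sketch of that probe; the real check trusts the cluster CA, whereas InsecureSkipVerify below is a shortcut for illustration only.

package healthz

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// check GETs the apiserver healthz endpoint and requires a 200 with body
// "ok", matching the api_server.go lines above.
func check(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; the real check trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

Here, check("https://192.168.39.156:8443/healthz") would return nil at the point where the log records "returned 200: ok".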
	I0813 21:08:31.586879   10867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:31.766375   10867 system_pods.go:59] 8 kube-system pods found
	I0813 21:08:31.766406   10867 system_pods.go:61] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:31.766412   10867 system_pods.go:61] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:31.766416   10867 system_pods.go:61] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:31.766421   10867 system_pods.go:61] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:31.766424   10867 system_pods.go:61] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:31.766428   10867 system_pods.go:61] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:31.766436   10867 system_pods.go:61] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:31.766440   10867 system_pods.go:61] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:31.766447   10867 system_pods.go:74] duration metric: took 179.562479ms to wait for pod list to return data ...
	I0813 21:08:31.766456   10867 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:31.964873   10867 default_sa.go:45] found service account: "default"
	I0813 21:08:31.964899   10867 default_sa.go:55] duration metric: took 198.43488ms for default service account to be created ...
	I0813 21:08:31.964911   10867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:32.168305   10867 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:32.168349   10867 system_pods.go:89] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:32.168359   10867 system_pods.go:89] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:32.168369   10867 system_pods.go:89] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:32.168377   10867 system_pods.go:89] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:32.168384   10867 system_pods.go:89] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:32.168390   10867 system_pods.go:89] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:32.168402   10867 system_pods.go:89] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:32.168412   10867 system_pods.go:89] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:32.168423   10867 system_pods.go:126] duration metric: took 203.506299ms to wait for k8s-apps to be running ...
	I0813 21:08:32.168436   10867 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:32.168487   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:32.183556   10867 system_svc.go:56] duration metric: took 15.110742ms WaitForService to wait for kubelet.
	I0813 21:08:32.183585   10867 kubeadm.go:547] duration metric: took 12.430541017s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:32.183611   10867 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:32.366938   10867 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:32.366970   10867 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:32.366989   10867 node_conditions.go:105] duration metric: took 183.372537ms to run NodePressure ...
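The node_conditions lines read ephemeral-storage (17784752Ki) and CPU (2) capacity off the node object. A sketch of the equivalent client-go read; names are illustrative.

package nodecond

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity lists nodes and reports the same two capacity figures the
// node_conditions lines log: ephemeral storage and CPU count.
func printCapacity(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}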
	I0813 21:08:32.367004   10867 start.go:231] waiting for startup goroutines ...
	I0813 21:08:32.428402   10867 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:08:32.430754   10867 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813205917-30853" cluster and "default" namespace by default
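The start.go:462 line above compares the local kubectl (1.20.5) against the cluster (1.21.3) and reports a minor skew of 1, within kubectl's supported one-minor-version window. The arithmetic, as a small sketch with illustrative names:

package skew

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" versions, e.g. ("1.20.5", "1.21.3") -> 1.
func minorSkew(client, server string) (int, error) {
	cm, err := minor(client)
	if err != nil {
		return 0, err
	}
	sm, err := minor(server)
	if err != nil {
		return 0, err
	}
	if cm > sm {
		return cm - sm, nil
	}
	return sm - cm, nil
}

func minor(v string) (int, error) {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("malformed version %q", v)
	}
	return strconv.Atoi(parts[1])
}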
	I0813 21:08:31.925048   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.421689   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:33.402937   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.404185   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.559235   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:35.559264   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559272   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559278   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559284   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559289   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559299   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:35.559305   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559325   10272 retry.go:31] will retry after 6.062999549s: missing components: kube-controller-manager
	I0813 21:08:36.917628   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:39.412918   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:37.902004   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.400508   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:41.627792   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:41.627828   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627837   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627844   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627851   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:41.627857   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627863   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627874   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:41.627882   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627906   10272 retry.go:31] will retry after 10.504197539s: missing components: kube-controller-manager
	I0813 21:08:41.415467   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.418679   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.419622   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:01:48 UTC, end at Fri 2021-08-13 21:08:46 UTC. --
	Aug 13 21:08:45 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:45.581348263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=da970da1-b35e-44bd-9848-e93442811899 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.333103640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=adab7763-e200-4564-a11b-29fb8244cd1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.333188552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=adab7763-e200-4564-a11b-29fb8244cd1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.333466572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kub
ernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4
984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CON
TAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888
876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:16288888758234599
07,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=adab7763-e200-4564-a11b-29fb8244cd1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.377521219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=33b9b50f-b928-476a-947b-256f53de1834 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.377676472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=33b9b50f-b928-476a-947b-256f53de1834 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.378068943Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=33b9b50f-b928-476a-947b-256f53de1834 name=/runtime.v1alpha2.RuntimeService/ListContainers [container list identical to id=adab7763 above; duplicate response body omitted]
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.419512126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=64a7bf68-7680-4244-82db-0b242af9f124 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.419655418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=64a7bf68-7680-4244-82db-0b242af9f124 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.420798979Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=64a7bf68-7680-4244-82db-0b242af9f124 name=/runtime.v1alpha2.RuntimeService/ListContainers [container list identical to id=adab7763 above; duplicate response body omitted]
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.467350127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b73be3e4-5780-4ed1-9fec-3da5cb713935 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.467413312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b73be3e4-5780-4ed1-9fec-3da5cb713935 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.467598870Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=b73be3e4-5780-4ed1-9fec-3da5cb713935 name=/runtime.v1alpha2.RuntimeService/ListContainers [container list identical to id=adab7763 above; duplicate response body omitted]
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.506067871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cf26a40b-25a5-4f31-9735-4df6e7ab8ae8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.506216268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cf26a40b-25a5-4f31-9735-4df6e7ab8ae8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.506526200Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=cf26a40b-25a5-4f31-9735-4df6e7ab8ae8 name=/runtime.v1alpha2.RuntimeService/ListContainers [container list identical to id=adab7763 above; duplicate response body omitted]
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.548044991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cfc9318f-93b8-4317-803e-faa923e6277b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.548108952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cfc9318f-93b8-4317-803e-faa923e6277b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.548345789Z" level=debug msg="Response: &ListContainersResponse{...}" file="go-grpc-middleware/chain.go:25" id=cfc9318f-93b8-4317-803e-faa923e6277b name=/runtime.v1alpha2.RuntimeService/ListContainers [container list identical to id=adab7763 above; duplicate response body omitted]
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.591381704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f391d4cc-9f80-4931-88e0-2455c494d481 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.591517025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f391d4cc-9f80-4931-88e0-2455c494d481 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.591706206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888875823459907,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f391d4cc-9f80-4931-88e0-2455c494d481 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.635198026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=12cb4f02-43bf-45b6-a435-eae8ee9729f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.635265152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=12cb4f02-43bf-45b6-a435-eae8ee9729f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:46 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:46.635468058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888875823459907,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=12cb4f02-43bf-45b6-a435-eae8ee9729f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	1233d640b6fe4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   11 seconds ago      Exited              dashboard-metrics-scraper   1                   b88a0e8366b20
	91ce32446dbb6       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   20 seconds ago      Running             kubernetes-dashboard        0                   33266a9854848
	1ba3b441b51a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Running             storage-provisioner         0                   f6a238f5e8f90
	2a1b9a1d4a2b6       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   24 seconds ago      Running             coredns                     0                   91973f54aaaf5
	cba4967a2c2ca       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   26 seconds ago      Running             kube-proxy                  0                   bbd93e1b95832
	d1827f5ba3f77       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   49 seconds ago      Running             kube-scheduler              0                   1cbb2a7cfe7c8
	8a1f73a982b2d       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   50 seconds ago      Running             etcd                        0                   e75ff20160ef7
	9387b11356ea0       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   50 seconds ago      Running             kube-controller-manager     0                   3f133993f51d8
	43af874e547f6       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   50 seconds ago      Running             kube-apiserver              0                   c650c93f5b421
	
	* 
	* ==> coredns [2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +4.559950] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.040924] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.084946] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1727 comm=systemd-network
	[  +0.826895] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +2.247774] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005907] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:02] systemd-fstab-generator[2133]: Ignoring "noauto" for root device
	[  +0.183630] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.260871] systemd-fstab-generator[2172]: Ignoring "noauto" for root device
	[  +6.767201] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
	[ +17.663697] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.643891] kauditd_printk_skb: 107 callbacks suppressed
	[Aug13 21:03] kauditd_printk_skb: 2 callbacks suppressed
	[ +37.908192] NFSD: Unable to end grace period: -110
	[Aug13 21:07] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.326104] kauditd_printk_skb: 14 callbacks suppressed
	[ +14.799084] systemd-fstab-generator[6047]: Ignoring "noauto" for root device
	[Aug13 21:08] systemd-fstab-generator[6428]: Ignoring "noauto" for root device
	[ +15.462471] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.808060] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.969890] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.090698] systemd-fstab-generator[7892]: Ignoring "noauto" for root device
	[  +0.825336] systemd-fstab-generator[7946]: Ignoring "noauto" for root device
	[  +1.031372] systemd-fstab-generator[8000]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a] <==
	* 2021-08-13 21:07:56.806650 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 21:07:56.817028 I | etcdserver: 45ea9d8f303c08fa as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa switched to configuration voters=(5038012371482446074)
	2021-08-13 21:07:56.822242 I | etcdserver/membership: added member 45ea9d8f303c08fa [https://192.168.39.156:2380] to cluster d1f5bcbb1e4f2572
	2021-08-13 21:07:56.825435 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 21:07:56.825666 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 21:07:56.825808 I | embed: listening for peers on 192.168.39.156:2380
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa is starting a new election at term 1
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa became candidate at term 2
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa received MsgVoteResp from 45ea9d8f303c08fa at term 2
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa became leader at term 2
	raft2021/08/13 21:07:56 INFO: raft.node: 45ea9d8f303c08fa elected leader 45ea9d8f303c08fa at term 2
	2021-08-13 21:07:56.877936 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 21:07:56.883311 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 21:07:56.883457 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 21:07:56.883490 I | etcdserver: published {Name:embed-certs-20210813205917-30853 ClientURLs:[https://192.168.39.156:2379]} to cluster d1f5bcbb1e4f2572
	2021-08-13 21:07:56.885922 I | embed: ready to serve client requests
	2021-08-13 21:07:56.889110 I | embed: ready to serve client requests
	2021-08-13 21:07:56.890628 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:07:56.906080 I | embed: serving client requests on 192.168.39.156:2379
	2021-08-13 21:08:21.444097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:08:23.770534 W | etcdserver: read-only range request "key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-certs\" " with result "range_response_count:0 size:5" took too long (116.65398ms) to execute
	2021-08-13 21:08:24.693511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:08:29.027443 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zdlnb\" " with result "range_response_count:1 size:4480" took too long (164.618661ms) to execute
	2021-08-13 21:08:34.692683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  21:08:56 up 7 min,  0 users,  load average: 1.71, 0.81, 0.39
	Linux embed-certs-20210813205917-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b] <==
	* I0813 21:08:01.721328       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 21:08:01.756931       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 21:08:02.511279       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:08:02.511302       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:08:02.529516       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:08:02.533470       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:08:02.533580       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:08:03.377568       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:08:03.429283       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:08:03.542496       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.156]
	I0813 21:08:03.543803       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:08:03.562423       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:08:04.190221       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:08:05.372658       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:08:05.451606       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:08:10.992596       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:08:18.341571       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 21:08:18.791922       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 21:08:25.206710       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 21:08:25.206984       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:08:25.206999       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:08:31.825652       1 client.go:360] parsed scheme: "passthrough"
	I0813 21:08:31.825702       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 21:08:31.825721       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50] <==
	* I0813 21:08:22.742977       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 21:08:22.840917       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 21:08:22.915612       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-qc7sb"
	I0813 21:08:23.384701       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:08:23.448806       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:08:23.485617       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:08:23.498760       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.499550       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.520670       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:08:23.537147       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.541926       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.549645       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.551660       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.591345       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.591709       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.594311       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.594769       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.615291       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.615656       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.628215       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.628577       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.644116       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.644266       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:08:23.672418       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-bvcl6"
	I0813 21:08:23.822383       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-77xxt"
	
	* 
	* ==> kube-proxy [cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945] <==
	* I0813 21:08:20.240227       1 node.go:172] Successfully retrieved node IP: 192.168.39.156
	I0813 21:08:20.240302       1 server_others.go:140] Detected node IP 192.168.39.156
	W0813 21:08:20.240359       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:08:20.337066       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:08:20.337097       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:08:20.337319       1 server_others.go:212] Using iptables Proxier.
	I0813 21:08:20.339187       1 server.go:643] Version: v1.21.3
	I0813 21:08:20.343565       1 config.go:315] Starting service config controller
	I0813 21:08:20.343594       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:08:20.343635       1 config.go:224] Starting endpoint slice config controller
	I0813 21:08:20.343640       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 21:08:20.348629       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 21:08:20.356382       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:08:20.444457       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:08:20.444544       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982] <==
	* E0813 21:08:01.748957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:08:01.752246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.752432       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.752673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:08:01.752810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:08:01.753010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:08:01.753152       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:08:01.753218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.753483       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.753591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:08:01.753659       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:08:01.758943       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:08:01.762207       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:08:02.640197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:08:02.660929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:02.708658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:08:02.724689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:08:02.759438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:08:02.777077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:08:02.906525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:02.956482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:02.963077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:08:02.985745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:08:03.253517       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 21:08:06.028996       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:01:48 UTC, end at Fri 2021-08-13 21:08:57 UTC. --
	Aug 13 21:08:23 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:23.947151    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjft\" (UniqueName: \"kubernetes.io/projected/87e4c5d0-a1f9-4f5a-9c80-aba83055f746-kube-api-access-bvjft\") pod \"dashboard-metrics-scraper-8685c45546-bvcl6\" (UID: \"87e4c5d0-a1f9-4f5a-9c80-aba83055f746\") "
	Aug 13 21:08:23 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:23.947354    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/87e4c5d0-a1f9-4f5a-9c80-aba83055f746-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-bvcl6\" (UID: \"87e4c5d0-a1f9-4f5a-9c80-aba83055f746\") "
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:24.048395    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9w5v\" (UniqueName: \"kubernetes.io/projected/6e067ab8-6535-4984-8dcf-037619871a7e-kube-api-access-j9w5v\") pod \"kubernetes-dashboard-6fcdf4f6d-77xxt\" (UID: \"6e067ab8-6535-4984-8dcf-037619871a7e\") "
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:24.048733    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6e067ab8-6535-4984-8dcf-037619871a7e-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-77xxt\" (UID: \"6e067ab8-6535-4984-8dcf-037619871a7e\") "
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.563317    6437 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.565194    6437 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.565448    6437 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8pp2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qc7sb_kube-system(43aa1ab2-5284-4d76-b826-12fd50a0ba54): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.566602    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qc7sb" podUID=43aa1ab2-5284-4d76-b826-12fd50a0ba54
	Aug 13 21:08:25 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:25.487449    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-qc7sb" podUID=43aa1ab2-5284-4d76-b826-12fd50a0ba54
	Aug 13 21:08:32 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:32.346799    6437 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/87e4c5d0-a1f9-4f5a-9c80-aba83055f746/etc-hosts with error exit status 1" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-bvcl6"
	Aug 13 21:08:32 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:32.370480    6437 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/43aa1ab2-5284-4d76-b826-12fd50a0ba54/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-qc7sb"
	Aug 13 21:08:35 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:35.082031    6437 scope.go:111] "RemoveContainer" containerID="ce36fb53702404bc5cdd5e36a22d39291db911cc24e730e55672b9450b4bc9e0"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:36.090037    6437 scope.go:111] "RemoveContainer" containerID="ce36fb53702404bc5cdd5e36a22d39291db911cc24e730e55672b9450b4bc9e0"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:36.090450    6437 scope.go:111] "RemoveContainer" containerID="1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.090682    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-bvcl6_kubernetes-dashboard(87e4c5d0-a1f9-4f5a-9c80-aba83055f746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-bvcl6" podUID=87e4c5d0-a1f9-4f5a-9c80-aba83055f746
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209029    6437 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209066    6437 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209169    6437 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8pp2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qc7sb_kube-system(43aa1ab2-5284-4d76-b826-12fd50a0ba54): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209206    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qc7sb" podUID=43aa1ab2-5284-4d76-b826-12fd50a0ba54
	Aug 13 21:08:37 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:37.104359    6437 scope.go:111] "RemoveContainer" containerID="1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271"
	Aug 13 21:08:37 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:37.104734    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-bvcl6_kubernetes-dashboard(87e4c5d0-a1f9-4f5a-9c80-aba83055f746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-bvcl6" podUID=87e4c5d0-a1f9-4f5a-9c80-aba83055f746
	Aug 13 21:08:42 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:42.622230    6437 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/43aa1ab2-5284-4d76-b826-12fd50a0ba54/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-qc7sb"
	Aug 13 21:08:43 embed-certs-20210813205917-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:08:43 embed-certs-20210813205917-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:08:43 embed-certs-20210813205917-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
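
The ErrImagePull above is expected for this suite: the metrics-server addon is deliberately pointed at the unresolvable registry fake.domain via --registries=MetricsServer=fake.domain (visible in the Audit table further down). A minimal Go sketch, not minikube code, reproducing just the DNS failure the kubelet wraps:

// Hypothetical standalone check: resolve the fake registry host the same
// way an image pull would. On this host the lookup fails with "no such
// host", which is exactly the error ErrImagePull carries above.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed (expected):", err)
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}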
	
	* 
	* ==> kubernetes-dashboard [91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f] <==
	* 2021/08/13 21:08:26 Using namespace: kubernetes-dashboard
	2021/08/13 21:08:26 Using in-cluster config to connect to apiserver
	2021/08/13 21:08:26 Using secret token for csrf signing
	2021/08/13 21:08:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:08:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:08:26 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:08:26 Generating JWE encryption key
	2021/08/13 21:08:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:08:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:08:27 Initializing JWE encryption key from synchronized object
	2021/08/13 21:08:27 Creating in-cluster Sidecar client
	2021/08/13 21:08:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:08:27 Serving insecurely on HTTP port: 9090
	2021/08/13 21:08:26 Starting overwatch
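
The dashboard startup above is a standard in-cluster bootstrap: read the mounted service-account config, then confirm the apiserver is reachable (the "version: v1.21.3" line). A rough client-go equivalent, illustrative rather than the dashboard's actual source, assuming it runs inside a pod:

// Sketch: in-cluster config plus a version probe against the apiserver.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // uses the mounted serviceaccount token
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver version:", v.GitVersion) // v1.21.3 in this run
}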
	
	* 
	* ==> storage-provisioner [1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce] <==
	* I0813 21:08:25.762110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:08:25.811183       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:08:25.817968       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:08:25.857352       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:08:25.858970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813205917-30853_5a7e4761-9efd-4312-9155-268d1305c244!
	I0813 21:08:25.860948       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e524bc5-c9ee-4bad-a746-e755f69879e4", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813205917-30853_5a7e4761-9efd-4312-9155-268d1305c244 became leader
	I0813 21:08:25.975265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813205917-30853_5a7e4761-9efd-4312-9155-268d1305c244!
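
The storage-provisioner lines show the usual client-go leader-election handshake over the kube-system/k8s.io-minikube-hostpath lock (an Endpoints lock, per the event above) before the controller starts. A sketch of that pattern, illustrative rather than the provisioner's actual source, with a made-up identity string:

// Hedged sketch of client-go leader election matching the lease in the log.
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same namespace/name as the lease above; the identity is hypothetical.
	lock, err := resourcelock.New(resourcelock.EndpointsResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}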
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:08:56.867344   12189 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
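
The "TLS handshake timeout" in the stderr block above is what makes the logs command exit non-zero here: it shells `kubectl describe nodes` into the VM, and the apiserver never completes the handshake, plausibly because the Pause under test had frozen the control plane. A minimal replay of the same probe (command copied verbatim from the log; assumes it runs on the node itself):

// Re-run the exact command logs.go reports failing, timing the attempt.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes "+
			"--kubeconfig=/var/lib/minikube/kubeconfig")
	start := time.Now()
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("failed after %s: %v\n%s", time.Since(start), err, out)
	}
	log.Printf("%s", out)
}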
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853: exit status 2 (264.148618ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813205917-30853 logs -n 25
E0813 21:09:01.472967   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210813205917-30853 logs -n 25: exit status 110 (11.302105831s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:00 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:03 UTC | Fri, 13 Aug 2021 20:59:03 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	| delete  | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 20:59:17 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:23 UTC | Fri, 13 Aug 2021 21:00:44 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:56 UTC | Fri, 13 Aug 2021 21:00:57 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:57 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813204600-30853         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:01 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | kubernetes-upgrade-20210813204600-30853           |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813210102-30853      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | disable-driver-mounts-20210813210102-30853        |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:17 UTC | Fri, 13 Aug 2021 21:01:05 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:18 UTC | Fri, 13 Aug 2021 21:01:19 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:19 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 21:02:15 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:27 UTC | Fri, 13 Aug 2021 21:02:28 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:03:15 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio           |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:03:32
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:03:32.257678   11600 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:03:32.257760   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257764   11600 out.go:311] Setting ErrFile to fd 2...
	I0813 21:03:32.257767   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257889   11600 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:03:32.258149   11600 out.go:305] Setting JSON to false
	I0813 21:03:32.297164   11600 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9974,"bootTime":1628878638,"procs":184,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:03:32.297442   11600 start.go:121] virtualization: kvm guest
	I0813 21:03:32.300208   11600 out.go:177] * [no-preload-20210813205915-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:03:32.301763   11600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:03:32.300370   11600 notify.go:169] Checking for updates...
	I0813 21:03:32.303324   11600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:03:32.304875   11600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:03:32.306390   11600 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:03:32.306988   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:03:32.307576   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.307638   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.319235   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0813 21:03:32.319644   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.320320   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.320347   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.320748   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.320979   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.321189   11600 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:03:32.321646   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.321692   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.332966   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0813 21:03:32.333332   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.333819   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.333847   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.334199   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.334372   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.365034   11600 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:03:32.365061   11600 start.go:278] selected driver: kvm2
	I0813 21:03:32.365067   11600 start.go:751] validating driver "kvm2" against &{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.365197   11600 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:03:32.367047   11600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.367426   11600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:03:32.378154   11600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:03:32.378447   11600 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:03:32.378474   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:03:32.378482   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:03:32.378489   11600 start_flags.go:277] config:
	{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.378585   11600 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:30.512688   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:33.010993   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:32.670472   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:35.171315   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:30.963285   11447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210813210102-30853" ...
	I0813 21:03:30.963310   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Start
	I0813 21:03:30.963467   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring networks are active...
	I0813 21:03:30.965431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network default is active
	I0813 21:03:30.965733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network mk-default-k8s-different-port-20210813210102-30853 is active
	I0813 21:03:30.966083   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Getting domain xml...
	I0813 21:03:30.968061   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Creating domain...
	I0813 21:03:31.416170   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting to get IP...
	I0813 21:03:31.417365   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418005   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has current primary IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418042   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Found IP for machine: 192.168.50.136
	I0813 21:03:31.418064   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserving static IP address...
	I0813 21:03:31.418520   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.418572   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | skip adding static IP to network mk-default-k8s-different-port-20210813210102-30853 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"}
	I0813 21:03:31.418592   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserved static IP address: 192.168.50.136
	I0813 21:03:31.418609   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting for SSH to be available...
	I0813 21:03:31.418628   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:31.424645   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.425182   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425389   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:31.425422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:31.425464   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:31.425482   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:31.425509   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:32.380458   11600 out.go:177] * Starting control plane node no-preload-20210813205915-30853 in cluster no-preload-20210813205915-30853
	I0813 21:03:32.380479   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:03:32.380628   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:03:32.380658   11600 cache.go:108] acquiring lock: {Name:mkb38baead8d508ff836651dee18a7788cf32c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380644   11600 cache.go:108] acquiring lock: {Name:mk46180cf67d5c541fa2597ef8e0122b51c3d66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380670   11600 cache.go:108] acquiring lock: {Name:mk7bb3b696fd3372110b0be599d95315e027c7ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380696   11600 cache.go:108] acquiring lock: {Name:mkf1d6f5d79a8fed4d2cc99505f5f3464b88e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380719   11600 cache.go:108] acquiring lock: {Name:mk828c96511ca39b5ec24da9b6afedd4727bdcf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380743   11600 cache.go:108] acquiring lock: {Name:mk03e6bcc333bfad143239419641099a94fed11e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380784   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 21:03:32.380790   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 21:03:32.380787   11600 cache.go:108] acquiring lock: {Name:mk928ab7caca14c2ebd27b364dc38d466ea61870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380747   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 21:03:32.380809   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 21:03:32.380803   11600 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 161.844µs
	I0813 21:03:32.380822   11600 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 21:03:32.380808   11600 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 149.17µs
	I0813 21:03:32.380819   11600 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 164.006µs
	I0813 21:03:32.380839   11600 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 21:03:32.380837   11600 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:03:32.380848   11600 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 21:03:32.380801   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 21:03:32.380838   11600 cache.go:108] acquiring lock: {Name:mk3d501986e0e48ddd0db3c6e93347910f1116e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380854   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 21:03:32.380853   11600 cache.go:108] acquiring lock: {Name:mkf7939d465d516c835d7d7703c105943f1ade9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380867   11600 start.go:313] acquiring machines lock for no-preload-20210813205915-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:03:32.380868   11600 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 155.968µs
	I0813 21:03:32.380881   11600 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 21:03:32.380876   11600 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 155.847µs
	I0813 21:03:32.380760   11600 cache.go:108] acquiring lock: {Name:mkec6e53ab9796f80ec65d6b99a6c3ee881fedd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380890   11600 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380896   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 21:03:32.380899   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 21:03:32.380841   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 21:03:32.380909   11600 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 73.516µs
	I0813 21:03:32.380913   11600 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 62.387µs
	I0813 21:03:32.380921   11600 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380939   11600 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380925   11600 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 136.425µs
	I0813 21:03:32.380966   11600 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380936   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 21:03:32.380982   11600 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 225.197µs
	I0813 21:03:32.380995   11600 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 21:03:32.380828   11600 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.9µs
	I0813 21:03:32.381004   11600 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 21:03:32.381012   11600 cache.go:88] Successfully saved all images to host disk.
	I0813 21:03:35.012590   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.514197   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.669098   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.168374   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.013348   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.014535   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.670990   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:44.671751   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:43.440320   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:43.440353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:43.440363   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | command : exit 0
	I0813 21:03:43.440369   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | err     : exit status 255
	I0813 21:03:43.440381   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | output  : 
	I0813 21:03:47.896090   11600 start.go:317] acquired machines lock for "no-preload-20210813205915-30853" in 15.515202861s
	I0813 21:03:47.896143   11600 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:03:47.896154   11600 fix.go:55] fixHost starting: 
	I0813 21:03:47.896500   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:47.896553   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:47.909531   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0813 21:03:47.909942   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:47.910569   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:47.910588   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:47.910953   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:47.911154   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:47.911327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:03:47.913763   11600 fix.go:108] recreateIfNeeded on no-preload-20210813205915-30853: state=Stopped err=<nil>
	I0813 21:03:47.913791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	W0813 21:03:47.913946   11600 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:03:44.511774   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.514028   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:48.515447   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:47.170765   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:49.174655   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.440683   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:46.445948   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446304   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.446340   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446496   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:46.446533   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:46.446579   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:46.446601   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:46.446618   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:46.582984   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:03:46.583312   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetConfigRaw
	I0813 21:03:46.584076   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.589266   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589559   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.589588   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589810   11447 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/config.json ...
	I0813 21:03:46.590017   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590212   11447 machine.go:88] provisioning docker machine ...
	I0813 21:03:46.590232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590407   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590545   11447 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210813210102-30853"
	I0813 21:03:46.590576   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590701   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.595270   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595544   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.595577   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.595884   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596013   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596117   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.596285   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.596463   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.596478   11447 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813210102-30853 && echo "default-k8s-different-port-20210813210102-30853" | sudo tee /etc/hostname
	I0813 21:03:46.733223   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813210102-30853
	
	I0813 21:03:46.733252   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.739002   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739323   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.739359   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739481   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.739690   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739849   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739990   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.740161   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.740320   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.740349   11447 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813210102-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813210102-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813210102-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:03:46.872322   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
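
	The provisioning step above issues two idempotent SSH commands: one sets the kernel hostname and writes /etc/hostname, the other replaces or appends the 127.0.1.1 entry in /etc/hosts so reruns do not duplicate it. A sketch of how those commands can be rendered for a given machine name (the helper below is illustrative, not minikube's own template code):

	package main

	import "fmt"

	// hostnameCmds reproduces the two provisioning commands logged above.
	// Sketch only; minikube's buildroot provisioner owns the real templates.
	func hostnameCmds(name string) (set, hosts string) {
		set = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
		hosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
		return set, hosts
	}

	func main() {
		set, hosts := hostnameCmds("default-k8s-different-port-20210813210102-30853")
		fmt.Println(set)
		fmt.Println(hosts)
	}
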
	I0813 21:03:46.872366   11447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:03:46.872403   11447 buildroot.go:174] setting up certificates
	I0813 21:03:46.872413   11447 provision.go:83] configureAuth start
	I0813 21:03:46.872433   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.872715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.878075   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878404   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.878459   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878540   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.882767   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883077   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.883108   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883225   11447 provision.go:138] copyHostCerts
	I0813 21:03:46.883299   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:03:46.883314   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:03:46.883398   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:03:46.883517   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:03:46.883530   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:03:46.883563   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:03:46.883642   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:03:46.883654   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:03:46.883682   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:03:46.883763   11447 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813210102-30853 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube default-k8s-different-port-20210813210102-30853]
	I0813 21:03:46.987158   11447 provision.go:172] copyRemoteCerts
	I0813 21:03:46.987214   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:03:46.987238   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.992216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992440   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.992475   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992656   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.992817   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.992969   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.993066   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.083216   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0813 21:03:47.100865   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:03:47.117328   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:03:47.134074   11447 provision.go:86] duration metric: configureAuth took 261.642322ms
	I0813 21:03:47.134094   11447 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:03:47.134262   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:03:47.134353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.139472   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139780   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.139807   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139944   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.140097   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140275   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140411   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.140599   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.140769   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.140790   11447 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:03:47.633895   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:03:47.633930   11447 machine.go:91] provisioned docker machine in 1.043703131s
	I0813 21:03:47.633942   11447 start.go:267] post-start starting for "default-k8s-different-port-20210813210102-30853" (driver="kvm2")
	I0813 21:03:47.633950   11447 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:03:47.633971   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.634293   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:03:47.634328   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.639277   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639636   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.639663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.639947   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.640111   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.640242   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.734400   11447 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:03:47.740052   11447 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:03:47.740071   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:03:47.740130   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:03:47.740231   11447 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:03:47.740344   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:03:47.747174   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:03:47.764416   11447 start.go:270] post-start completed in 130.462296ms
	I0813 21:03:47.764450   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.764711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.770040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770384   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.770431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770530   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.770719   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.770894   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.771070   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.771253   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.771444   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.771459   11447 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:03:47.895861   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888627.837623344
	
	I0813 21:03:47.895892   11447 fix.go:212] guest clock: 1628888627.837623344
	I0813 21:03:47.895903   11447 fix.go:225] Guest: 2021-08-13 21:03:47.837623344 +0000 UTC Remote: 2021-08-13 21:03:47.764694239 +0000 UTC m=+16.980843358 (delta=72.929105ms)
	I0813 21:03:47.895929   11447 fix.go:196] guest clock delta is within tolerance: 72.929105ms
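
	The guest-clock check above compares the VM's `date +%s.%N` output against the host clock and treats small drift as acceptable; only a delta beyond tolerance would trigger a resync. A sketch of the comparison, with the tolerance value assumed for illustration (the log only tells us 72.929105ms was within it):

	package main

	import (
		"fmt"
		"time"
	)

	// clockTolerance is an assumed value for this sketch.
	const clockTolerance = 2 * time.Second

	func clockDeltaOK(guest, host time.Time) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= clockTolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(72929105 * time.Nanosecond) // the delta logged above
		d, ok := clockDeltaOK(guest, host)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
	}
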
	I0813 21:03:47.895937   11447 fix.go:57] fixHost completed within 16.950003029s
	I0813 21:03:47.895942   11447 start.go:80] releasing machines lock for "default-k8s-different-port-20210813210102-30853", held for 16.950031669s
	I0813 21:03:47.896001   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.896297   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:47.901493   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.901838   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.901870   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.902050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902228   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902976   11447 ssh_runner.go:149] Run: systemctl --version
	I0813 21:03:47.902995   11447 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:03:47.903007   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.903040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.909125   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.909452   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909630   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.909813   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.909935   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.910059   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.910088   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910489   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.910527   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910654   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.910777   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.910927   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.911072   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:48.006087   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:03:48.006215   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:03:47.916188   11600 out.go:177] * Restarting existing kvm2 VM for "no-preload-20210813205915-30853" ...
	I0813 21:03:47.916218   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Start
	I0813 21:03:47.916374   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring networks are active...
	I0813 21:03:47.918363   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network default is active
	I0813 21:03:47.918666   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network mk-no-preload-20210813205915-30853 is active
	I0813 21:03:47.919177   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Getting domain xml...
	I0813 21:03:47.921207   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating domain...
	I0813 21:03:48.385941   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting to get IP...
	I0813 21:03:48.387086   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.387686   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Found IP for machine: 192.168.105.107
	I0813 21:03:48.387718   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserving static IP address...
	I0813 21:03:48.387738   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has current primary IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.388204   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.388236   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserved static IP address: 192.168.105.107
	I0813 21:03:48.388276   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | skip adding static IP to network mk-no-preload-20210813205915-30853 - found existing host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"}
	I0813 21:03:48.388306   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:48.388326   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting for SSH to be available...
	I0813 21:03:48.393946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394418   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.394445   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:03:48.394790   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:03:48.394865   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:48.394885   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:03:48.394902   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 21:03:51.014322   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.517299   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:51.667636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.672798   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:52.032310   11447 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.026067051s)
	I0813 21:03:52.032472   11447 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:03:52.032533   11447 ssh_runner.go:149] Run: which lz4
	I0813 21:03:52.036917   11447 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:03:52.041879   11447 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:03:52.041911   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 21:03:54.836023   11447 crio.go:362] Took 2.799141 seconds to copy over tarball
	I0813 21:03:54.836104   11447 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:03:56.016199   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.747725   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:56.174092   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.745387   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:57.599639   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:58.136181   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:58.136210   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | command : exit 0
	I0813 21:03:58.136247   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | err     : exit status 255
	I0813 21:03:58.136301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | output  : 
	I0813 21:04:00.599792   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:04:00.606127   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606561   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:00.606599   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606684   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:04:00.606710   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:04:00.606759   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:04:00.606779   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:04:00.606791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 21:04:01.865012   11447 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.028876371s)
	I0813 21:04:01.865051   11447 crio.go:369] Took 7.028990 seconds to extract the tarball
	I0813 21:04:01.865065   11447 ssh_runner.go:100] rm: /preloaded.tar.lz4
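
	The preload sequence that just completed is: probe the guest for /preloaded.tar.lz4 with stat, scp the cached tarball over when the probe fails, unpack it into /var with `tar -I lz4`, then delete the tarball. A compact sketch of that flow against an abstract command runner (the runner interface here is an assumption standing in for minikube's ssh_runner):

	package main

	import "fmt"

	type runner interface{ Run(cmd string) error }

	type echoRunner struct{}

	func (echoRunner) Run(cmd string) error { fmt.Println("RUN:", cmd); return nil }

	// installPreload mirrors the logged steps: existence check, copy when
	// missing (scp elided here), extraction, cleanup.
	func installPreload(r runner, localTarball string) error {
		const remote = "/preloaded.tar.lz4"
		if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err != nil {
			fmt.Printf("would scp %s --> %s\n", localTarball, remote)
		}
		if err := r.Run("sudo tar -I lz4 -C /var -xf " + remote); err != nil {
			return fmt.Errorf("extract preload: %w", err)
		}
		return r.Run("sudo rm -f " + remote)
	}

	func main() {
		_ = installPreload(echoRunner{}, "preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4")
	}
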
	I0813 21:04:01.909459   11447 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:01.921741   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:01.931836   11447 docker.go:153] disabling docker service ...
	I0813 21:04:01.931885   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:01.943769   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:01.957001   11447 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:02.141489   11447 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:02.286672   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:04:02.301487   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:02.316482   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:04:02.324481   11447 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:02.332086   11447 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:02.332135   11447 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:02.348397   11447 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
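
	The three runs above implement a fallback: `sysctl net.bridge.bridge-nf-call-iptables` exits 255 because the key only exists once br_netfilter is loaded, so the module is modprobe'd and IPv4 forwarding is switched on afterwards. A sketch of that fallback logic (runner again an illustrative stand-in):

	package main

	import "fmt"

	type runner interface{ Run(cmd string) error }

	type printRunner struct{ sysctlFails bool }

	func (p printRunner) Run(cmd string) error {
		fmt.Println("RUN:", cmd)
		if p.sysctlFails && cmd == "sudo sysctl net.bridge.bridge-nf-call-iptables" {
			return fmt.Errorf("Process exited with status 255") // key absent
		}
		return nil
	}

	// enableNetfilter mirrors the logged order: verify, fall back to
	// loading the module, then enable ip_forward.
	func enableNetfilter(r runner) error {
		if err := r.Run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
			// "might be okay": the sysctl appears once br_netfilter loads
			if err := r.Run("sudo modprobe br_netfilter"); err != nil {
				return err
			}
		}
		return r.Run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
	}

	func main() {
		_ = enableNetfilter(printRunner{sysctlFails: true})
	}
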
	I0813 21:04:02.355704   11447 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:02.519419   11447 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:02.853377   11447 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:02.853455   11447 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:02.859109   11447 start.go:413] Will wait 60s for crictl version
	I0813 21:04:02.859179   11447 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:02.895788   11447 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:02.895871   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:02.973856   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:01.014560   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:03.513509   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:01.169481   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.824663   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.802040   11447 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 21:04:04.802102   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:04:04.808733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809248   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:04:04.809286   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809574   11447 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:04.815288   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
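
	The /etc/hosts update just run is a replace-or-append via a temp file: strip any line ending in the managed name, append the fresh entry, then cp the result back under sudo. The same pattern recurs below for control-plane.minikube.internal. A sketch of the command construction (the helper name is illustrative):

	package main

	import "fmt"

	// hostsEntryCmd renders the bash one-liner seen in the log for a given
	// ip/name pair. Sketch only.
	func hostsEntryCmd(ip, name string) string {
		entry := ip + "\t" + name // minikube writes a literal tab here
		return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo '%s'; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, name, entry)
	}

	func main() {
		fmt.Println(hostsEntryCmd("192.168.50.1", "host.minikube.internal"))
	}
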
	I0813 21:04:04.828595   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:04:04.828664   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.877574   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.877604   11447 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:04:04.877660   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.914222   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.914249   11447 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:04:04.914336   11447 ssh_runner.go:149] Run: crio config
	I0813 21:04:05.157389   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:05.157412   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:05.157424   11447 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:05.157439   11447 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813210102-30853 NodeName:default-k8s-different-port-20210813210102-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.136 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:05.157622   11447 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "default-k8s-different-port-20210813210102-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:05.157727   11447 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=default-k8s-different-port-20210813210102-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.136 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
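
	Before the kubeadm.yaml.new above is shipped to the guest, note it is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check can be written by splitting on document separators and confirming each part carries apiVersion and kind; this sketch assumes the sigs.k8s.io/yaml module is available and is not minikube's own validation:

	package main

	import (
		"fmt"
		"strings"

		"sigs.k8s.io/yaml" // assumption: fetched with `go get sigs.k8s.io/yaml`
	)

	func checkKubeadmConfig(doc string) error {
		for i, part := range strings.Split(doc, "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(part), &m); err != nil {
				return fmt.Errorf("doc %d: %w", i, err)
			}
			if m["apiVersion"] == nil || m["kind"] == nil {
				return fmt.Errorf("doc %d: missing apiVersion/kind", i)
			}
			fmt.Printf("doc %d: %s/%s\n", i, m["apiVersion"], m["kind"])
		}
		return nil
	}

	func main() {
		cfg := "apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\n"
		if err := checkKubeadmConfig(cfg); err != nil {
			panic(err)
		}
	}
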
	I0813 21:04:05.157774   11447 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:04:05.167087   11447 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:05.167155   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:05.175473   11447 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (528 bytes)
	I0813 21:04:05.188753   11447 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:04:05.201467   11447 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0813 21:04:05.215461   11447 ssh_runner.go:149] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:05.220200   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.231726   11447 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853 for IP: 192.168.50.136
	I0813 21:04:05.231797   11447 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:05.231825   11447 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:05.231898   11447 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.key
	I0813 21:04:05.231928   11447 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key.cb5546de
	I0813 21:04:05.231952   11447 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key
	I0813 21:04:05.232111   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:05.232165   11447 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:05.232188   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:05.232232   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:05.232271   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:05.232307   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:05.232379   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:05.233804   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:05.253715   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:05.273351   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:05.290830   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:04:05.308416   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:05.326529   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:05.346664   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:05.364492   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:05.381949   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:05.399680   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:05.419759   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:05.438209   11447 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:05.450680   11447 ssh_runner.go:149] Run: openssl version
	I0813 21:04:05.457245   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:05.465670   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.470976   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.471018   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.477477   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:05.486446   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:05.494612   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499391   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499438   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.505622   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:05.514421   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:05.523408   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528337   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528382   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.535765   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
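The openssl sequence above implements the standard c_rehash convention: a CA becomes system-trusted by symlinking it into /etc/ssl/certs under its subject hash with a .0 suffix, which is the name OpenSSL uses for directory lookups. A condensed sketch of the idiom, assuming CERT points at any PEM certificate:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"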
	I0813 21:04:05.544593   11447 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813210102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:05.544684   11447 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:05.544726   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:05.585256   11447 cri.go:76] found id: ""
	I0813 21:04:05.585334   11447 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:05.593681   11447 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:05.593711   11447 kubeadm.go:600] restartCluster start
	I0813 21:04:05.593760   11447 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:05.602117   11447 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:05.603061   11447 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813210102-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:05.603385   11447 kubeconfig.go:128] "default-k8s-different-port-20210813210102-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:05.604147   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
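The verify-and-repair step above re-adds the missing context to the shared kubeconfig under a file lock. A quick, hypothetical way to confirm the repair from the host, using the kubeconfig path from the log:

	KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig \
	  kubectl config get-contexts default-k8s-different-port-20210813210102-30853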
	I0813 21:04:05.606733   11447 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:05.614257   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.614297   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.624492   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:02.775071   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:04:02.775420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetConfigRaw
	I0813 21:04:02.776115   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:02.782201   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.782674   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.782712   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.783141   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:04:02.783367   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783571   11600 machine.go:88] provisioning docker machine ...
	I0813 21:04:02.783598   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.783946   11600 buildroot.go:166] provisioning hostname "no-preload-20210813205915-30853"
	I0813 21:04:02.783971   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.784147   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.789849   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790287   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.790320   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.790578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790777   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790928   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.791095   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.791315   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.791336   11600 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813205915-30853 && echo "no-preload-20210813205915-30853" | sudo tee /etc/hostname
	I0813 21:04:02.946559   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813205915-30853
	
	I0813 21:04:02.946596   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.952957   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953358   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.953393   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953568   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.953745   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.953960   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.954167   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.954385   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.954624   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.954665   11600 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813205915-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813205915-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813205915-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:04:03.094292   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:04:03.094324   11600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:04:03.094356   11600 buildroot.go:174] setting up certificates
	I0813 21:04:03.094369   11600 provision.go:83] configureAuth start
	I0813 21:04:03.094384   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:03.094688   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:03.100354   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.100739   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.105867   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106237   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.106310   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106463   11600 provision.go:138] copyHostCerts
	I0813 21:04:03.106530   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:04:03.106543   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:04:03.106590   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:04:03.106682   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:04:03.106693   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:04:03.106720   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:04:03.106783   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:04:03.106793   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:04:03.106815   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:04:03.106882   11600 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813205915-30853 san=[192.168.105.107 192.168.105.107 localhost 127.0.0.1 minikube no-preload-20210813205915-30853]
	I0813 21:04:03.232637   11600 provision.go:172] copyRemoteCerts
	I0813 21:04:03.232735   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:04:03.232781   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.238750   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239227   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.239262   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.239634   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.239802   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.239979   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:03.330067   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:04:03.347432   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:04:03.580187   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:04:03.733835   11600 provision.go:86] duration metric: configureAuth took 639.447362ms
	I0813 21:04:03.733873   11600 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:04:03.734092   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:04:03.734225   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.740654   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741046   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.741091   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741217   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.741420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741586   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741748   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.741941   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:03.742078   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:03.742093   11600 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:04:04.399833   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:04:04.399867   11600 machine.go:91] provisioned docker machine in 1.616277375s
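The sysconfig file written a few lines above is how minikube passes extra flags to CRI-O: /etc/sysconfig/crio.minikube is presumably sourced by the crio unit on the buildroot ISO, so the restart picks up the service CIDR as an insecure registry. A sketch for checking it took effect, assuming the no-preload profile from this run:

	minikube -p no-preload-20210813205915-30853 ssh -- \
	  cat /etc/sysconfig/crio.minikube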
	I0813 21:04:04.399881   11600 start.go:267] post-start starting for "no-preload-20210813205915-30853" (driver="kvm2")
	I0813 21:04:04.399888   11600 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:04:04.399909   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.400282   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:04:04.400324   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.406533   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.406945   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.406987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.407240   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.407441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.407578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.407746   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.498949   11600 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:04:04.503867   11600 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:04:04.503896   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:04:04.503972   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:04:04.504097   11600 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:04:04.504223   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:04:04.511733   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:04.528408   11600 start.go:270] post-start completed in 128.513758ms
	I0813 21:04:04.528443   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.528707   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.534254   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534663   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.534695   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.534987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535140   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535279   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.535426   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:04.535597   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:04.535608   11600 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:04:04.663945   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888644.593571707
	
	I0813 21:04:04.663967   11600 fix.go:212] guest clock: 1628888644.593571707
	I0813 21:04:04.663974   11600 fix.go:225] Guest: 2021-08-13 21:04:04.593571707 +0000 UTC Remote: 2021-08-13 21:04:04.528687546 +0000 UTC m=+32.319635142 (delta=64.884161ms)
	I0813 21:04:04.663992   11600 fix.go:196] guest clock delta is within tolerance: 64.884161ms
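The fix.go lines above are minikube's guest clock-skew check: it reads the guest's epoch time over SSH (date +%s.%N) and compares it against the host clock, only resetting the guest clock when the delta exceeds tolerance. A rough, hypothetical equivalent run from the host:

	guest=$(minikube -p no-preload-20210813205915-30853 ssh -- date +%s.%N)
	host=$(date +%s.%N)
	echo "skew: $(echo "$host - $guest" | bc)s"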
	I0813 21:04:04.663998   11600 fix.go:57] fixHost completed within 16.76784432s
	I0813 21:04:04.664002   11600 start.go:80] releasing machines lock for "no-preload-20210813205915-30853", held for 16.76787935s
	I0813 21:04:04.664032   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.664301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:04.670385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670693   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.670728   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670905   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671084   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671497   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671741   11600 ssh_runner.go:149] Run: systemctl --version
	I0813 21:04:04.671770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.671781   11600 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:04:04.671828   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.677842   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.677920   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678239   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678271   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678303   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678537   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678601   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678680   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678746   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678866   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.678918   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.778153   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:04.778247   11600 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:04.790123   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:04.799742   11600 docker.go:153] disabling docker service ...
	I0813 21:04:04.799795   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:04.814660   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:04.826371   11600 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:04.984940   11600 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:05.134330   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
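Because this run uses CRI-O, the Docker units are stopped, disabled, and masked first so dockerd cannot reclaim the runtime role on reboot. The equivalent manual sequence would look roughly like:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	systemctl is-active --quiet docker || echo "docker is down"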
	I0813 21:04:05.146967   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:05.162919   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
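The two commands above wire the CRI tooling to CRI-O: /etc/crictl.yaml sets the default runtime and image endpoints for bare crictl calls, and the sed pins CRI-O's pause_image to the sandbox image kubeadm expects. The endpoint can equally be passed per invocation; sketch:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version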
	I0813 21:04:05.171969   11600 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:05.178773   11600 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:05.178830   11600 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:05.195828   11600 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
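The failed sysctl above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. The standard sequence:

	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is in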
	I0813 21:04:05.202754   11600 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:05.337419   11600 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:05.559682   11600 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:05.559752   11600 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:05.566062   11600 start.go:413] Will wait 60s for crictl version
	I0813 21:04:05.566138   11600 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:05.601921   11600 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:05.602001   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.842661   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.956395   11600 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:04:05.956450   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:05.962605   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.962975   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:05.962999   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.963185   11600 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:05.968381   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.979746   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:05.979790   11600 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:06.037577   11600 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:04:06.037602   11600 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 k8s.gcr.io/kube-proxy:v1.22.0-rc.0 k8s.gcr.io/pause:3.4.1 k8s.gcr.io/etcd:3.4.13-3 k8s.gcr.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.037756   11600 image.go:133] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.037772   11600 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.037785   11600 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:06.037762   11600 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:06.037738   11600 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.037735   11600 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.037741   11600 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:06.037767   11600 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:06.039362   11600 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0813 21:04:06.053753   11600 image.go:171] found k8s.gcr.io/pause:3.4.1 locally: &{Image:0xc000d620e0}
	I0813 21:04:06.053840   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.454088   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.627170   11600 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000a3e0e0}
	I0813 21:04:06.627262   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.677125   11600 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" does not exist at hash "ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c" in container runtime
	I0813 21:04:06.677177   11600 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.677243   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.772729   11600 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000a3e3e0}
	I0813 21:04:06.772826   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.829141   11600 image.go:171] found k8s.gcr.io/coredns/coredns:v1.8.0 locally: &{Image:0xc00142e1e0}
	I0813 21:04:06.829237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.902889   11600 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 21:04:06.902989   11600 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.903035   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.902933   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:07.109713   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.109813   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:07.109896   11600 ssh_runner.go:316] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117259   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0 (exists)
	I0813 21:04:07.117279   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117314   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.171175   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 21:04:07.171310   11600 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
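This is the no-preload image path: since no preloaded tarball matched v1.22.0-rc.0, each required image is checked in podman, mismatched copies are removed via crictl, and the cached tarballs are loaded one at a time. A condensed sketch of that load step for a single image, with IMG and TAR as stand-ins for the names above:

	IMG=k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	TAR=/var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1 \
	  || sudo podman load -i "$TAR"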
	I0813 21:04:05.516944   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:08.013394   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:07.172226   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:09.188184   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:05.824992   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.825077   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.837175   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.025601   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.025691   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.036326   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.225644   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.225742   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.238574   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.425637   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.425737   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.438316   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.625622   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.625698   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.643437   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.824708   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.824784   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.840790   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.024978   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.025048   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.042237   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.225613   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.225690   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.238533   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.424924   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.425004   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.437239   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.625345   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.625418   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.643925   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.825147   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.825246   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.839517   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.024742   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.024831   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.037540   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.224652   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.224733   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.237758   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.425032   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.425121   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.438563   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.624675   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.624790   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.640197   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.640219   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.640266   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.654071   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.654097   11447 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
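	[editor's note] The repeated "Checking apiserver status" entries above are a fixed-cadence poll: minikube shells into the VM and reruns pgrep until a kube-apiserver pid appears or the wait times out, at which point it falls through to "needs reconfigure". A minimal sketch of that pattern follows; the 200ms cadence is read off the timestamps above, and the helper name is illustrative, not minikube's API.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process whose
// command line mentions "minikube" shows up, or the deadline passes.
// Illustrative only: minikube runs the same command over SSH inside the VM.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // pid found
		}
		time.Sleep(200 * time.Millisecond) // cadence seen in the log above
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(5 * time.Second)
	fmt.Println(pid, err)
}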
	I0813 21:04:08.654106   11447 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:08.654124   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:08.654177   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:08.717698   11447 cri.go:76] found id: ""
	I0813 21:04:08.717795   11447 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:08.753323   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:08.778307   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:08.778369   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800125   11447 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800151   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:09.316586   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.438674   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.122049553s)
	I0813 21:04:10.438715   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:07.759123   11600 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000d620e0}
	I0813 21:04:07.759237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:09.111081   11600 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc00142e040}
	I0813 21:04:09.111212   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:09.462306   11600 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc00142e140}
	I0813 21:04:09.462414   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:10.255823   11600 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc0012f0120}
	I0813 21:04:10.255916   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:11.315708   11600 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000d62460}
	I0813 21:04:11.315815   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:10.514963   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:12.516333   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:11.670913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:14.171134   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:10.800884   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.992029   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
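	[editor's note] The restart path above replays kubeadm's init phases one at a time (certs, kubeconfigs, kubelet, static control-plane manifests, local etcd) instead of a full kubeadm init. A sketch of that sequence, assuming the binary layout the log shows; the runner itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const kubeadmEnv = "PATH=/var/lib/minikube/binaries/v1.21.3:$PATH"
	// Same phase order the log shows above.
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		cmd := fmt.Sprintf("sudo env %s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
			kubeadmEnv, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}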
	I0813 21:04:11.167449   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:11.167518   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:11.684011   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.184677   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.684502   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.184162   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.684035   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.183991   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.683969   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.184603   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.684380   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.372670   11600 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (6.201329225s)
	I0813 21:04:13.372706   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: (6.255368199s)
	I0813 21:04:13.372718   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0813 21:04:13.372732   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 from cache
	I0813 21:04:13.372728   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (5.613461548s)
	I0813 21:04:13.372758   11600 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372783   11600 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" does not exist at hash "7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75" in container runtime
	I0813 21:04:13.372830   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (3.910399102s)
	I0813 21:04:13.372858   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372868   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3: (3.116939311s)
	I0813 21:04:13.372873   11600 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" does not exist at hash "b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a" in container runtime
	I0813 21:04:13.372900   11600 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:13.372831   11600 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.372924   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (2.057095132s)
	I0813 21:04:13.372931   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372936   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372786   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0: (4.261556732s)
	I0813 21:04:13.373009   11600 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" does not exist at hash "cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c" in container runtime
	I0813 21:04:13.373032   11600 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:13.373056   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.381245   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.381490   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:15.288527   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.915644282s)
	I0813 21:04:15.288559   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 21:04:15.288601   11600 ssh_runner.go:189] Completed: which crictl: (1.91552977s)
	I0813 21:04:15.288660   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:15.288670   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (1.907403335s)
	I0813 21:04:15.288709   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288741   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (1.90722818s)
	I0813 21:04:15.288782   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.288805   11600 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288858   11600 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323185   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323264   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323283   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323302   11600 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323314   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323320   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.329111   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0 (exists)
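	[editor's note] The interleaved image-cache lines above boil down to one decision per image: stat the tarball on the guest (size and mtime) to decide whether to re-copy it, run podman image inspect to check whether the tag already exists at the expected digest, and podman load the tarball when it does not. A sketch of that decision under those assumptions; the helper is illustrative, minikube does this over SSH with more bookkeeping.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the per-image decision in the log: if podman does
// not report the expected image ID, drop any stale tag and load the
// tarball that was copied into the VM.
func ensureImage(tag, wantID, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", tag).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash: nothing to do
	}
	exec.Command("sudo", "crictl", "rmi", tag).Run() // remove stale tag, ignore errors
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	fmt.Println(ensureImage("k8s.gcr.io/kube-scheduler:v1.22.0-rc.0",
		"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75",
		"/var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0"))
}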
	I0813 21:04:15.011212   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:17.011691   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.670490   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:19.170343   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.184356   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:16.684936   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.184954   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.684681   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.184911   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.684242   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.184095   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.683984   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.184175   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.210489   11447 api_server.go:70] duration metric: took 9.043039811s to wait for apiserver process to appear ...
	I0813 21:04:20.210519   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:20.210533   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:20.211291   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": dial tcp 192.168.50.136:8444: connect: connection refused
	I0813 21:04:20.711989   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:21.745565   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: (6.422201905s)
	I0813 21:04:21.745599   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 from cache
	I0813 21:04:21.745635   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:21.745691   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:19.017281   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.514778   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.515219   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.171057   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.670243   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.713040   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:24.199550   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: (2.45382894s)
	I0813 21:04:24.199592   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 from cache
	I0813 21:04:24.199629   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:24.199702   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:26.212134   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:26.605510   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:04:26.605545   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:04:26.711743   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.047887   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.047925   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.212219   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.218272   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.218303   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.711515   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.725621   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.725665   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:28.212046   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:28.224546   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:04:28.234553   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:04:28.234579   11447 api_server.go:129] duration metric: took 8.024053155s to wait for apiserver health ...
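	[editor's note] The healthz wait above is instructive: the 403 for system:anonymous means TLS and RBAC are already up but anonymous auth is rejected, and the 500s enumerate post-start hooks that have not finished; both are treated as "keep polling" until the endpoint returns 200/ok. A sketch of that loop, assuming the 500ms cadence visible in the timestamps; the client setup is illustrative (minikube uses the cluster CA and a longer overall timeout).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending)
			// both mean "not ready yet" rather than a hard failure.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.136:8444/healthz", 30*time.Second))
}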
	I0813 21:04:28.234595   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:28.234616   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:26.019080   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.516769   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.670866   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:27.671923   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:30.171118   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.236904   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:04:28.236969   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:04:28.252383   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:04:28.300743   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:04:28.320179   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:04:28.320225   11447 system_pods.go:61] "coredns-558bd4d5db-v2sv5" [3b82b811-5e28-41dc-b0e1-71233efc654e] Running
	I0813 21:04:28.320234   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [89cff97c-ff5c-4920-a05f-1ec7b313043b] Running
	I0813 21:04:28.320241   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [734380ac-398d-4b51-a67f-aaac2457110c] Running
	I0813 21:04:28.320252   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [ebc5d291-624f-4c49-b9cb-436204a7665a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:04:28.320261   11447 system_pods.go:61] "kube-proxy-99cxm" [a1bfba1d-d9fb-4d24-abe9-fd0522c591f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:04:28.320271   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [b66e01ad-943e-4a2c-aabe-d18f92fd5eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 21:04:28.320290   11447 system_pods.go:61] "metrics-server-7c784ccb57-xfj59" [b522ac66-040a-4030-a817-c422c703b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:04:28.320308   11447 system_pods.go:61] "storage-provisioner" [d59ea453-ed7b-4952-bd61-7993245a1986] Running
	I0813 21:04:28.320315   11447 system_pods.go:74] duration metric: took 19.546937ms to wait for pod list to return data ...
	I0813 21:04:28.320330   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:04:28.329682   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:04:28.329749   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:04:28.329769   11447 node_conditions.go:105] duration metric: took 9.429948ms to run NodePressure ...
	I0813 21:04:28.329793   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:29.546168   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.216348804s)
	I0813 21:04:29.546210   11447 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563341   11447 kubeadm.go:746] kubelet initialised
	I0813 21:04:29.563369   11447 kubeadm.go:747] duration metric: took 17.148102ms waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563380   11447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:04:29.573196   11447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:04:29.338170   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: (5.138437758s)
	I0813 21:04:29.338201   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 from cache
	I0813 21:04:29.338230   11600 cache_images.go:113] Successfully loaded all cached images
	I0813 21:04:29.338242   11600 cache_images.go:82] LoadImages completed in 23.300623842s
	I0813 21:04:29.338374   11600 ssh_runner.go:149] Run: crio config
	I0813 21:04:29.638116   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:04:29.638137   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:29.638149   11600 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:29.638162   11600 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.107 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813205915-30853 NodeName:no-preload-20210813205915-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.107 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:29.638336   11600 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "no-preload-20210813205915-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:29.638444   11600 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=no-preload-20210813205915-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.107 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:04:29.638511   11600 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:04:29.651119   11600 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:29.651199   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:29.658178   11600 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (518 bytes)
	I0813 21:04:29.674188   11600 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:04:29.689809   11600 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I0813 21:04:29.704568   11600 ssh_runner.go:149] Run: grep 192.168.105.107	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:29.709516   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
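	[editor's note] The two lines above pin control-plane.minikube.internal in /etc/hosts idempotently: grep checks for an existing entry, and the bash one-liner strips any stale mapping before appending a fresh one. The same effect in Go, as a sketch (the helper name and the writable path in main are hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so it contains exactly one
// "<ip>\tcontrol-plane.minikube.internal" line, matching the
// grep -v / echo / cp pipeline in the log.
func pinHost(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line) // keep every unrelated entry
		}
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(pinHost("/tmp/hosts", "192.168.105.107"))
}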
	I0813 21:04:29.722084   11600 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853 for IP: 192.168.105.107
	I0813 21:04:29.722165   11600 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:29.722197   11600 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:29.722281   11600 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.key
	I0813 21:04:29.722312   11600 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key.209a1939
	I0813 21:04:29.722343   11600 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key
	I0813 21:04:29.722473   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:29.722561   11600 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:29.722580   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:29.722661   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:29.722712   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:29.722757   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:29.722866   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:29.724368   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:29.746769   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:29.768192   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:29.786871   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:04:29.806532   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:29.825599   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:29.847494   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:29.870257   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:29.892328   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:29.912923   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:29.931703   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:29.951536   11600 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:29.968398   11600 ssh_runner.go:149] Run: openssl version
	I0813 21:04:29.976170   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:29.984473   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989429   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989476   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.995576   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:30.003420   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:30.011665   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.017989   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.018036   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.025928   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:30.036305   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:30.046763   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052505   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052558   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.059983   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
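	[editor's note] The openssl sequence above installs each extra cert under /usr/share/ca-certificates and links it into /etc/ssl/certs by its subject hash (e.g. b5213941.0, 51391683.0), which is how OpenSSL locates CAs at verify time. A minimal sketch of that step; the function name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the log's pattern: compute the cert's
// subject hash with openssl, then symlink /etc/ssl/certs/<hash>.0 to it
// so OpenSSL's c_rehash-style lookup finds the CA.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
}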
	I0813 21:04:30.068353   11600 kubeadm.go:390] StartCluster: {Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:30.068511   11600 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:30.068563   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:30.103079   11600 cri.go:76] found id: ""
	I0813 21:04:30.103167   11600 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:30.112165   11600 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:30.112188   11600 kubeadm.go:600] restartCluster start
	I0813 21:04:30.112242   11600 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:30.120196   11600 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.121712   11600 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813205915-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:30.122350   11600 kubeconfig.go:128] "no-preload-20210813205915-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:30.123522   11600 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:30.127714   11600 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
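	[editor's note] Before the poll resumes below, restartCluster gates on prior state: the sudo ls above verifies that kubeadm-flags.env, the kubelet config, and the etcd data dir all survived, the kubeconfig context is repaired when the profile is missing, and the rendered kubeadm.yaml is diffed against the on-disk copy. A sketch of the restart-vs-fresh-init gate under those assumptions; the helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// canRestart mirrors the gate in the log: a cluster restart is only
// attempted when the previous run's state files are all still present.
func canRestart() bool {
	err := exec.Command("sudo", "ls",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd").Run()
	return err == nil
}

func main() {
	if canRestart() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior state: fall through to a fresh kubeadm init")
	}
}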
	I0813 21:04:30.134966   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.135011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.144537   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.344893   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.345009   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.354676   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.544891   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.544966   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.554560   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.744600   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.744692   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.756935   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.945184   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.945265   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.955263   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.145650   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.145758   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.157682   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.344971   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.345039   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.354648   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.544933   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.545001   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.554862   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.745107   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.745178   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.756702   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.945036   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.945134   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.956052   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.145356   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.145486   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.154892   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.013514   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.515372   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.667378   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:34.671027   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:31.606937   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.614157   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.344907   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.344989   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.354828   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.545178   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.545268   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.554771   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.745015   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.745132   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.754451   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.945134   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.945223   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.958046   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.145379   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.145471   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.156311   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.156338   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.156387   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.166450   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.166479   11600 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:04:33.166489   11600 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:33.166504   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:33.166556   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:33.201224   11600 cri.go:76] found id: ""
	I0813 21:04:33.201320   11600 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:33.218274   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:33.226895   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:33.226953   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233603   11600 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233633   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:33.409004   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.227200   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.522150   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.670047   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.781290   11600 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:34.781393   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.294318   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.794319   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.294093   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.794810   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.517996   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.013307   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.169398   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:39.667640   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:36.109861   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.110944   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:40.608444   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.294229   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:37.794174   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.294380   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.795081   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.295011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.794912   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.294691   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.794676   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.294339   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.794517   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.514739   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.515815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:41.674615   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:44.171008   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:43.111611   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:45.608557   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.294762   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:42.794735   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.294817   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.794556   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.818714   11600 api_server.go:70] duration metric: took 9.037423183s to wait for apiserver process to appear ...
	I0813 21:04:43.818749   11600 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:43.818763   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:43.819314   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": dial tcp 192.168.105.107:8443: connect: connection refused
	I0813 21:04:44.319959   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
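
Editor's note: the loop above shows minikube repeatedly running `sudo pgrep -xnf kube-apiserver.*minikube.*` at roughly 500ms intervals until a PID appears; pgrep exits non-zero when nothing matches, which is the "Process exited with status 1" seen earlier. The following is a minimal, self-contained sketch of that wait pattern, not minikube's actual api_server.go (whose runner executes the command over SSH); the function name and timeout are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it prints a PID or the
// timeout elapses. pgrep exits non-zero when no process matches, so a
// non-nil err here simply means "not running yet".
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return "", fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
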
	I0813 21:04:45.012244   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.016481   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:46.672075   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.172907   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.615450   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:50.112038   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.320842   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:49.820028   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:49.514363   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.012464   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:51.669686   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:53.793699   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.607875   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.608704   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.821107   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:55.319665   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:54.013451   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.512870   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.517483   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.168752   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.169636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:57.108818   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:59.110668   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.319940   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:05:00.819508   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:01.018546   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:03.515645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.668977   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:02.670402   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.170956   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:01.618304   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:04.109034   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.157882   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:05:05.158001   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:05:05.320212   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.504416   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.504471   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:05.819967   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.864291   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.864338   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.319440   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.332338   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:06.332364   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.820046   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.827164   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 200:
	ok
	I0813 21:05:06.836155   11600 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:05:06.836176   11600 api_server.go:129] duration metric: took 23.017420085s to wait for apiserver health ...
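
Editor's note: the healthz sequence above progresses from connection refused, through 403 (anonymous RBAC not yet bootstrapped) and 500 (poststarthooks still failing, each [-] entry flipping to [+] as the apiserver warms up), to 200 "ok". Below is a hedged sketch of the same probe loop; it is not minikube's api_server.go, and the anonymous, certificate-skipping client is purely for illustration, mirroring the anonymous access path implied by the 403.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200, treating connection errors
// and non-200 statuses (403, 500) as "not ready yet", as the log does.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Unverified TLS purely for this sketch; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.105.107:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
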
	I0813 21:05:06.836188   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:05:06.836198   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:05:06.838586   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:05:06.838684   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:05:06.847037   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:05:06.865264   11600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:05:06.893537   11600 system_pods.go:59] 8 kube-system pods found
	I0813 21:05:06.893572   11600 system_pods.go:61] "coredns-78fcd69978-wqktx" [84e2ed0e-2c5a-4dcc-a8de-2cee9f92d267] Running
	I0813 21:05:06.893578   11600 system_pods.go:61] "etcd-no-preload-20210813205915-30853" [de55bcf6-20c8-4b4a-81e0-b181cca0e618] Running
	I0813 21:05:06.893582   11600 system_pods.go:61] "kube-apiserver-no-preload-20210813205915-30853" [53002765-155d-4f17-b484-2fe4e088255d] Running
	I0813 21:05:06.893587   11600 system_pods.go:61] "kube-controller-manager-no-preload-20210813205915-30853" [6052be3c-51df-4a5c-b8a1-6a5a64b4d241] Running
	I0813 21:05:06.893594   11600 system_pods.go:61] "kube-proxy-vvkkd" [c6eef664-f71d-4d0f-aec7-8942b5977520] Running
	I0813 21:05:06.893599   11600 system_pods.go:61] "kube-scheduler-no-preload-20210813205915-30853" [24d521ca-7b13-4b06-805d-7b568471cffb] Running
	I0813 21:05:06.893615   11600 system_pods.go:61] "metrics-server-7c784ccb57-rfp5v" [8c3b111e-0b1d-4a36-85ab-49fe495a538e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:05:06.893629   11600 system_pods.go:61] "storage-provisioner" [dfb23af4-15d2-420e-8720-c4fee1cf94f8] Running
	I0813 21:05:06.893637   11600 system_pods.go:74] duration metric: took 28.354614ms to wait for pod list to return data ...
	I0813 21:05:06.893648   11600 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:05:06.916270   11600 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:05:06.916300   11600 node_conditions.go:123] node cpu capacity is 2
	I0813 21:05:06.916316   11600 node_conditions.go:105] duration metric: took 22.662818ms to run NodePressure ...
	I0813 21:05:06.916337   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:05.516343   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.517331   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.670058   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:09.675888   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:06.111044   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.608567   11447 pod_ready.go:92] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.608606   11447 pod_ready.go:81] duration metric: took 38.035378096s waiting for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.608620   11447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615404   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.615428   11447 pod_ready.go:81] duration metric: took 6.797829ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615442   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630269   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.630291   11447 pod_ready.go:81] duration metric: took 14.84004ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630301   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637173   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.637191   11447 pod_ready.go:81] duration metric: took 6.881994ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637205   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641787   11447 pod_ready.go:92] pod "kube-proxy-99cxm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.641806   11447 pod_ready.go:81] duration metric: took 4.592412ms waiting for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641816   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006732   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:08.006761   11447 pod_ready.go:81] duration metric: took 364.934714ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006777   11447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.416206   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.404648   11600 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:05:07.414912   11600 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0813 21:05:07.708787   11600 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0813 21:05:08.256390   11600 kubeadm.go:746] kubelet initialised
	I0813 21:05:08.256419   11600 kubeadm.go:747] duration metric: took 851.738381ms waiting for restarted kubelet to initialise ...
	I0813 21:05:08.256432   11600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:05:08.265413   11600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.372610   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
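
Editor's note: the odd intervals in the retry lines above ("will retry after 276.165072ms", "after 540.190908ms") come from a growing backoff with random jitter rather than fixed round delays. A small sketch of that pattern follows; it is an assumption-laden stand-in, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter retries op with a delay that doubles each attempt and
// carries up to 50% random jitter, producing non-round waits like those
// printed in the log.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		wait := base * time.Duration(1<<i)
		wait += time.Duration(rand.Int63n(int64(wait / 2)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(5, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("kubelet not initialised")
		}
		return nil
	})
	fmt.Println("result:", err)
}
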
	I0813 21:05:10.016406   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.513411   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.171097   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.667560   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.416520   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.917152   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.791126   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:15.296951   11600 pod_ready.go:92] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:15.296981   11600 pod_ready.go:81] duration metric: took 7.031537534s waiting for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:15.296992   11600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:14.513966   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.518250   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.669467   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:18.670323   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.956540   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.413311   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.316436   11600 pod_ready.go:102] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.817195   11600 pod_ready.go:92] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.817242   11600 pod_ready.go:81] duration metric: took 2.520242337s waiting for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.817255   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.825965   11600 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.825988   11600 pod_ready.go:81] duration metric: took 8.722511ms waiting for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.826001   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:19.873713   11600 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.011904   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.016678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.516661   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.171346   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.667746   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.422135   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.915750   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:22.369972   11600 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.370008   11600 pod_ready.go:81] duration metric: took 4.543995238s waiting for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.370023   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377665   11600 pod_ready.go:92] pod "kube-proxy-vvkkd" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.377685   11600 pod_ready.go:81] duration metric: took 7.65301ms waiting for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377696   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385096   11600 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.385113   11600 pod_ready.go:81] duration metric: took 7.408599ms waiting for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385121   11600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:24.402382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.901061   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.018949   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.513145   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:25.668326   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.186367   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.415525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.913863   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.902947   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.903048   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.516874   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.011959   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.666530   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:32.666799   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:34.668707   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.915376   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.415440   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.415962   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.403872   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.902644   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.014820   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.015893   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.169496   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.170551   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.918334   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.414297   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:38.408969   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.903397   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.017723   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.512620   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.513209   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.171007   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.668192   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:42.915720   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.423660   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.403450   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.445034   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.515122   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.013001   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.669651   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.167953   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.171552   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.916795   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:49.916975   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.904497   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.399990   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.512153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.512918   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.174821   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.670257   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.414652   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.415677   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.404181   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.904430   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.515153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.013806   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.168792   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.666912   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:56.416201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:58.917986   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.401016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.404016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.906289   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.512815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.514140   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.668491   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.668678   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.413828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.414479   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.403957   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.901856   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.012166   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.013309   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.512931   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.168995   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.667450   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:05.918408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.416404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.416808   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.903609   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.405857   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.014642   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.512706   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.669910   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.170072   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:12.919893   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.417469   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.901800   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:16.402802   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.514827   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.012928   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.668033   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.668913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.167322   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.914829   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.413984   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.405532   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.902412   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.017907   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.514292   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.170177   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.668943   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.416213   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.922905   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.902968   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.401882   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.067645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.519637   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.167658   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.168133   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.413791   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.414145   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.402765   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.403392   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.900702   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:30.012069   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:32.014177   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.169296   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.160326   10272 pod_ready.go:81] duration metric: took 4m0.399801158s waiting for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" ...
	E0813 21:06:33.160356   10272 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:06:33.160383   10272 pod_ready.go:38] duration metric: took 4m1.6003819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:33.160416   10272 kubeadm.go:604] restartCluster took 4m59.137608004s
	W0813 21:06:33.160600   10272 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:06:33.160640   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
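
Editor's note: every `pod_ready.go:102 ... "Ready":"False"` line in this report is one tick of a per-pod wait on the Kubernetes PodReady condition, with a 4m0s budget that the metrics-server pod above exhausts before the cluster is reset. As a hedged illustration (using client-go, which is an assumption; minikube's pod_ready.go is not reproduced here), the check looks roughly like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
// Transient API errors return (false, nil) so polling continues.
func podReady(cs *kubernetes.Clientset, ns, name string) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// 4m0s mirrors the per-pod budget seen in the log before the
	// "(will not retry!)" give-up.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute,
		podReady(cs, "kube-system", "metrics-server-8546d8b77b-wf2ft"))
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
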
	I0813 21:06:31.419127   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.918800   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.903797   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.401884   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:34.015031   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.513631   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.414485   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.415451   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.416420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.900640   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.901483   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:39.011809   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:41.013908   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:43.513605   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.920201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.415258   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.905257   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:44.905610   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.514466   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.515852   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.415484   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.415708   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.414520   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.903972   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.517251   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.012858   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:51.918221   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.918831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.402393   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.902136   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.513409   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.012531   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.392100   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.231434099s)
	I0813 21:07:00.392193   10272 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:00.406886   10272 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:00.406959   10272 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:00.442137   10272 cri.go:76] found id: ""
	I0813 21:07:00.442208   10272 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:00.449499   10272 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:00.458330   10272 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:00.458372   10272 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0813 21:06:55.923186   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:58.413947   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.414960   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.401732   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:59.404622   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.901431   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.146030   10272 out.go:204]   - Generating certificates and keys ...
	I0813 21:06:59.013910   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.514845   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:02.514874   10272 out.go:204]   - Booting up control plane ...
	I0813 21:07:02.420421   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.921161   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:03.901922   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.400821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.017697   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.512767   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:07.415160   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.916408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:08.402752   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:10.903350   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.011421   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:11.015678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.515855   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.594414   10272 out.go:204]   - Configuring RBAC rules ...
	I0813 21:07:15.029321   10272 cni.go:93] Creating CNI manager for ""
	I0813 21:07:15.029346   10272 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:07:15.031000   10272 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:07:15.031061   10272 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:07:15.039108   10272 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:07:15.058649   10272 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:07:15.058707   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.058717   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205823-30853 minikube.k8s.io/updated_at=2021_08_13T21_07_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.095343   10272 ops.go:34] apiserver oom_adj: 16
	I0813 21:07:15.095372   10272 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 21:07:15.095386   10272 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:07:15.330590   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:12.413115   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.414512   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.400030   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.403757   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.505147   10867 pod_ready.go:81] duration metric: took 4m0.402080118s waiting for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" ...
	E0813 21:07:15.505169   10867 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:07:15.505190   10867 pod_ready.go:38] duration metric: took 4m39.330917946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:15.505243   10867 kubeadm.go:604] restartCluster took 5m2.104930788s
	W0813 21:07:15.505419   10867 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:07:15.505453   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:07:15.931748   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.430811   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.930834   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.430845   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.930776   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.431732   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.930812   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.431647   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.931099   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.431444   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.414885   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.422404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:17.901988   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.403379   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.930893   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.430961   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.931774   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.431310   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.931068   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.431314   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.931570   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.431290   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.931320   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:25.431531   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.914560   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.914642   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.916586   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.902451   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.903333   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.931646   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.431685   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.931719   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.431409   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.930888   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.431524   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.931535   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.431073   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.931502   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:30.430962   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.919653   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.418420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:30.543916   10272 kubeadm.go:985] duration metric: took 15.48526077s to wait for elevateKubeSystemPrivileges.
	I0813 21:07:30.543949   10272 kubeadm.go:392] StartCluster complete in 5m56.564780701s
	I0813 21:07:30.543981   10272 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:30.544141   10272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:07:30.545813   10272 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:31.081760   10272 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205823-30853" rescaled to 1
	I0813 21:07:31.081820   10272 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.49 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 21:07:31.083916   10272 out.go:177] * Verifying Kubernetes components...
	I0813 21:07:31.083983   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:07:31.081886   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:07:31.081888   10272 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:07:31.084080   10272 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084084   10272 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084099   10272 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.082132   10272 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0813 21:07:31.084108   10272 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:07:31.084120   10272 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084134   10272 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084143   10272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084151   10272 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084158   10272 addons.go:147] addon metrics-server should already be in state true
	I0813 21:07:31.084100   10272 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084168   10272 addons.go:147] addon dashboard should already be in state true
	I0813 21:07:31.084183   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084189   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084158   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084632   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084685   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084687   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084751   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084792   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084865   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.105064   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0813 21:07:31.105078   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0813 21:07:31.105589   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105724   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105733   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0813 21:07:31.105826   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0813 21:07:31.106201   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106225   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106288   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.106388   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106410   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106656   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106795   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106823   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106845   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106940   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.107274   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.107310   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.107372   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.107393   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.107505   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.107679   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.107914   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.108023   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108066   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.108456   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108502   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.121147   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0813 21:07:31.120919   10272 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.121411   10272 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:07:31.121457   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.121491   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0813 21:07:31.121993   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.122297   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.122764   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123195   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123739   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123763   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.123790   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123822   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.124154   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124287   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124315   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.124496   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.128429   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130930   10272 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:07:31.129602   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130875   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0813 21:07:31.132382   10272 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.132436   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:07:31.132451   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:07:31.132474   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.134119   10272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:07:31.134224   10272 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.134241   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:07:31.134259   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.132855   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.135094   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.135114   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.135252   10272 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.135886   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.136518   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.140126   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.140398   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:27.404366   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.901079   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.902091   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.142209   10272 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.142270   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:07:31.140792   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142282   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:07:31.140956   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.142313   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142015   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.142480   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.142494   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142517   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142738   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.142977   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.143006   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143155   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.143333   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.143530   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143544   10272 node_ready.go:49] node "old-k8s-version-20210813205823-30853" has status "Ready":"True"
	I0813 21:07:31.143557   10272 node_ready.go:38] duration metric: took 8.284522ms waiting for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.143568   10272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:31.145891   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0813 21:07:31.146234   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.146769   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.146792   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.147190   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.147843   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.147892   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.148364   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148819   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.148848   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148994   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.149157   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.149288   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.149464   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.154492   10272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:31.159199   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0813 21:07:31.159608   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.160083   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.160107   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.160442   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.160628   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.163581   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.163764   10272 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.163780   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:07:31.163796   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.169112   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169507   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.169535   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169656   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.169820   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.170004   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.170153   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.334616   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:07:31.339091   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.350144   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:07:31.350160   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:07:31.366866   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:07:31.366889   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:07:31.415434   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:07:31.415460   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:07:31.415813   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.439763   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:07:31.439787   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:07:31.551531   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.551559   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:07:31.614721   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:07:31.614757   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:07:31.648730   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.686266   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:07:31.686288   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:07:31.766323   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:07:31.766354   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:07:32.021208   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:07:32.021232   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:07:32.128868   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:07:32.128914   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:07:32.396755   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:07:32.396784   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:07:32.629623   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:32.629647   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:07:32.876963   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:33.170819   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.554610   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.219955078s)
	I0813 21:07:33.554661   10272 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
	I0813 21:07:33.554710   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.215586915s)
	I0813 21:07:33.554766   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.138920482s)
	I0813 21:07:33.554845   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554810   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554909   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.554882   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555205   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555224   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555237   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555251   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555322   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555339   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.555352   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555362   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.557880   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557881   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557894   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557900   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557931   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557951   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557969   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.558002   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.558255   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.558287   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.558297   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.417993   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.769219397s)
	I0813 21:07:34.418041   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.418055   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.419702   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.419703   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.419721   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.419735   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.419744   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.420013   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.420030   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.420042   10272 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:34.719323   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.842300346s)
	I0813 21:07:34.719378   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719393   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.719692   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.719710   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.719720   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719731   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.721171   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.721190   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.721177   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.723692   10272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:07:34.723719   10272 addons.go:344] enableAddons completed in 3.64184317s
	I0813 21:07:31.421963   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.916790   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.903029   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.402184   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:35.688121   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.171925   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.422423   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.916463   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.403153   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.903100   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.668346   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.668696   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:44.669555   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.922382   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.982831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.413525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:43.402566   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.905536   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:46.733235   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.227754709s)
	I0813 21:07:46.733320   10867 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:46.749380   10867 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:46.749451   10867 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:46.789090   10867 cri.go:76] found id: ""
	I0813 21:07:46.789192   10867 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:46.797753   10867 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:46.805773   10867 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:46.805816   10867 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:07:47.366092   10867 out.go:204]   - Generating certificates and keys ...
	I0813 21:07:48.287070   10867 out.go:204]   - Booting up control plane ...
	I0813 21:07:46.669635   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.169303   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.414190   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.914581   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:48.403863   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:50.902452   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.170024   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.672034   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:52.419570   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:54.922828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.400843   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:55.401813   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.169442   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.173990   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:00.180299   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.414460   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.414953   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.402188   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.407382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:01.902586   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.672361   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.168918   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.917732   10867 out.go:204]   - Configuring RBAC rules ...
	I0813 21:08:05.478215   10867 cni.go:93] Creating CNI manager for ""
	I0813 21:08:05.478240   10867 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:08:01.415978   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.916377   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.903277   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.908821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.480079   10867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:08:05.480166   10867 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:08:05.490836   10867 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:08:05.516775   10867 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813205917-30853 minikube.k8s.io/updated_at=2021_08_13T21_08_05_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.571274   10867 ops.go:34] apiserver oom_adj: -16
	I0813 21:08:05.877007   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.498456   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.997686   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.498266   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.998377   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:08.498124   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.171495   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.171976   10272 pod_ready.go:92] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.172005   10272 pod_ready.go:81] duration metric: took 37.017483324s waiting for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.172023   10272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178546   10272 pod_ready.go:92] pod "kube-proxy-xnqfc" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.178572   10272 pod_ready.go:81] duration metric: took 6.540181ms waiting for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178582   10272 pod_ready.go:38] duration metric: took 37.035002251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:08.178607   10272 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:08.178659   10272 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:08.193211   10272 api_server.go:70] duration metric: took 37.111356956s to wait for apiserver process to appear ...
	I0813 21:08:08.193234   10272 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:08.193245   10272 api_server.go:239] Checking apiserver healthz at https://192.168.83.49:8443/healthz ...
	I0813 21:08:08.200770   10272 api_server.go:265] https://192.168.83.49:8443/healthz returned 200:
	ok
	I0813 21:08:08.201945   10272 api_server.go:139] control plane version: v1.14.0
	I0813 21:08:08.201960   10272 api_server.go:129] duration metric: took 8.721341ms to wait for apiserver health ...
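
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 with body "ok". A minimal sketch of that probe loop (the URL comes from the log; the TLS skip reflects that the probe targets a self-signed cluster certificate by IP, and the 500ms poll interval is an assumption):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until the apiserver answers 200 "ok",
// matching the "Checking apiserver healthz at ..." lines above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.83.49:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
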
	I0813 21:08:08.201968   10272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:08.206023   10272 system_pods.go:59] 4 kube-system pods found
	I0813 21:08:08.206043   10272 system_pods.go:61] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206047   10272 system_pods.go:61] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206054   10272 system_pods.go:61] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.206058   10272 system_pods.go:61] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206065   10272 system_pods.go:74] duration metric: took 4.091873ms to wait for pod list to return data ...
	I0813 21:08:08.206072   10272 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:08.209997   10272 default_sa.go:45] found service account: "default"
	I0813 21:08:08.210015   10272 default_sa.go:55] duration metric: took 3.938001ms for default service account to be created ...
	I0813 21:08:08.210022   10272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:08.214317   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.214336   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214348   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.214354   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214373   10272 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.433733   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.433762   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433770   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433781   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.433788   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433807   10272 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.735301   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:08.735333   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735350   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:08.735360   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.735366   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735412   10272 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.097711   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.097745   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097753   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097758   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.097765   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.097770   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097788   10272 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.584281   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.584311   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584317   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584321   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.584329   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.584333   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584352   10272 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:10.134667   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:10.134694   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134701   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134706   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.134712   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.134716   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134738   10272 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
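
The retry.go lines above show the same system-pods check re-run with a delay that grows on each attempt (214ms, 293ms, 355ms, 480ms, 544ms, 684ms, ...). A minimal sketch of that pattern; the exact growth factor and jitter are assumptions about the schedule, not minikube's precise constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a little longer (with jitter) after each failure, matching
// the growing "will retry after ..." delays in the log above.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // assumed growth factor
	}
	return err
}

func main() {
	missing := 3 // stand-in for "how many control-plane pods are absent"
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		if missing > 0 {
			missing--
			return errors.New("missing components: kube-apiserver, kube-controller-manager")
		}
		return nil
	})
	fmt.Println("done:", err)
}
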
	I0813 21:08:05.922361   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.419726   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.401315   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.909126   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.998041   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.498515   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.998297   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.498018   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.997716   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.497679   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.998238   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.498701   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.997887   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:13.498358   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.825951   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:10.825981   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825987   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.825991   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825995   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.826001   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.826006   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.826027   10272 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:11.871229   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:11.871263   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871270   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871274   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871279   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871292   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:11.871300   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871321   10272 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:12.905014   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:12.905044   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905052   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905075   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:12.905081   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905105   10272 retry.go:31] will retry after 1.268973106s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:14.179146   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:14.179173   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179179   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179183   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179188   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179195   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:14.179202   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179223   10272 retry.go:31] will retry after 1.733071555s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:10.914496   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.924919   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.415784   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.401246   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.408120   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.997632   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.497943   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.998249   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.498543   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.998283   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.497729   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.997873   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.497972   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.997958   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.497761   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.997883   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:19.220539   10867 kubeadm.go:985] duration metric: took 13.703767036s to wait for elevateKubeSystemPrivileges.
	I0813 21:08:19.220607   10867 kubeadm.go:392] StartCluster complete in 6m5.865041156s
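
The burst of `kubectl get sa default` calls above, spaced roughly 500ms apart, is what elevateKubeSystemPrivileges spends its 13.7s on: kubeadm creates the "default" ServiceAccount asynchronously, so minikube polls for it before applying the minikube-rbac clusterrolebinding work can be considered complete. A minimal sketch of that wait (binary and kubeconfig paths copied from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount mirrors the repeated
// `sudo kubectl get sa default` calls in the log: poll until the
// "default" ServiceAccount exists in the cluster.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // exit code 0 means the SA exists
		}
		time.Sleep(500 * time.Millisecond) // matches the ~.5s spacing above
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.21.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute)
	fmt.Println("wait result:", err)
}
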
	I0813 21:08:19.220635   10867 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.220787   10867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:08:19.223909   10867 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.752954   10867 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813205917-30853" rescaled to 1
	I0813 21:08:19.753018   10867 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:08:19.754708   10867 out.go:177] * Verifying Kubernetes components...
	I0813 21:08:19.754778   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:19.753082   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:08:19.753107   10867 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:08:19.753299   10867 config.go:177] Loaded profile config "embed-certs-20210813205917-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:08:19.754891   10867 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754904   10867 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754933   10867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813205917-30853"
	I0813 21:08:19.754932   10867 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754940   10867 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754970   10867 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:19.754974   10867 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.754988   10867 addons.go:147] addon dashboard should already be in state true
	W0813 21:08:19.754987   10867 addons.go:147] addon metrics-server should already be in state true
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.754914   10867 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.755116   10867 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:08:19.755134   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755511   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755539   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755571   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755606   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755637   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755686   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.770580   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0813 21:08:19.771121   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.771377   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0813 21:08:19.771830   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.771853   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.771954   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.772247   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.772723   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.772739   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.772901   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0813 21:08:19.773026   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.773068   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.773413   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.773902   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.773924   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.774397   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774463   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774563   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.775023   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.775063   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.784550   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0813 21:08:19.784959   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.785506   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.785522   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.785894   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.786493   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.786525   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787205   10867 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.787228   10867 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:08:19.787259   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.787583   10867 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.787674   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.787718   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787787   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0813 21:08:19.787910   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0813 21:08:19.788204   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789084   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789106   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.789211   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789825   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.789931   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789953   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.790005   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.790276   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.790437   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.794978   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.794986   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.797284   10867 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.798757   10867 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:08:19.797345   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:08:19.798798   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:08:19.798822   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800334   10867 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.800389   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:08:19.800399   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:08:19.799838   10867 node_ready.go:49] node "embed-certs-20210813205917-30853" has status "Ready":"True"
	I0813 21:08:19.800420   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800422   10867 node_ready.go:38] duration metric: took 12.815275ms waiting for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.800442   10867 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:19.802028   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0813 21:08:19.802460   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.802983   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.803025   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.803483   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.803731   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.809104   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.809531   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.809654   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
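
Each pod_ready wait in this log reduces to fetching the pod and checking its Ready condition until it reports True, which is exactly what the "Ready":"True"/"False" lines reflect. A minimal client-go sketch of that check (the kubeconfig path and the 2s poll interval are assumptions for local use):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll a single named pod, as the coredns wait above does; a real
	// implementation would also enforce the 6m0s deadline from the log.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-558bd4d5db-8bmrm", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
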
	I0813 21:08:15.917751   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:15.917783   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917792   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917799   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917805   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917816   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:15.917823   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917844   10272 retry.go:31] will retry after 2.410580953s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:18.337846   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:18.337883   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337892   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337898   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337905   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337916   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:18.337923   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337944   10272 retry.go:31] will retry after 3.437877504s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:17.916739   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:20.415225   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.901469   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.902763   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.903648   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.811430   10867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:08:19.810007   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811541   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.811578   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811581   10867 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:19.810168   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.810293   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.810559   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.811047   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0813 21:08:19.811649   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811674   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:08:19.811689   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.811908   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.811910   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812443   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.812464   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.812475   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.813065   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.813083   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.813470   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.814035   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.814070   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.818289   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818751   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.818811   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.818838   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818903   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.819054   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.819209   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.825837   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0813 21:08:19.826199   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.826605   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.826624   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.826952   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.827127   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.830318   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.830538   10867 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:19.830553   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:08:19.830570   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.835761   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836143   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.836172   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836286   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.836451   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.836602   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.836724   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
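
The `new ssh client` and `scp memory` lines above pair up: minikube opens a key-authenticated SSH session as user docker, then streams each addon manifest from memory into a root-owned file on the VM instead of copying a file from disk. A minimal sketch with golang.org/x/crypto/ssh (host-key checking is disabled purely because this is a throwaway test VM; `sudo tee` as the remote sink is an assumption about the exact remote command):

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials the VM the way the sshutil lines above describe:
// key-based auth as user "docker" on port 22.
func newSSHClient(ip, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	return ssh.Dial("tcp", ip+":22", cfg)
}

// writeMemoryFile is the "scp memory --> <path> (N bytes)" step: the
// manifest never touches local disk; it is piped into sudo tee remotely.
func writeMemoryFile(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	client, err := newSSHClient("192.168.39.156", "/path/to/machines/embed-certs/id_rsa")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := writeMemoryFile(client, "/etc/kubernetes/addons/storageclass.yaml",
		[]byte("# manifest bytes\n")); err != nil {
		panic(err)
	}
}
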
	I0813 21:08:20.037292   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:08:20.037321   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:08:20.099263   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:08:20.099292   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:08:20.117736   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:20.146467   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:08:20.146494   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:08:20.148636   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:20.180430   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:08:20.180464   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:08:20.300161   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:08:20.301107   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:08:20.301131   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:08:20.311540   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.311565   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:08:20.390587   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:08:20.390623   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:08:20.411556   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.513347   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:08:20.513381   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:08:20.562665   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:08:20.562692   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:08:20.637151   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:08:20.637186   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:08:20.697238   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:08:20.697266   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:08:20.722593   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:20.722622   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:08:20.888939   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:21.832691   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:22.499631   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.381850453s)
	I0813 21:08:22.499694   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.499708   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.499992   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500011   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500021   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500031   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500251   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500299   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500317   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500327   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500578   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500587   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:22.500601   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607350   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458674806s)
	I0813 21:08:22.607409   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607423   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607684   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607702   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607713   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607728   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607970   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607987   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.671948   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.371722218s)
	I0813 21:08:22.671991   10867 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
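
The host-record injection that just completed is the get | sed | replace pipeline quoted above: fetch the coredns ConfigMap as YAML, insert a hosts{} block immediately before the forward directive, and replace the ConfigMap so pods can resolve host.minikube.internal to the gateway. A minimal sketch that shells out the same pipeline (binary, kubeconfig, and gateway IP copied from the log; run on the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.21.3/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	hostIP := "192.168.39.1" // gateway IP that should resolve host.minikube.internal

	// Same get | sed | replace pipeline as in the log: sed's `i\` inserts
	// a CoreDNS hosts{} block right before the forward directive.
	pipeline := fmt.Sprintf(
		`sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml | `+
			`sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %[3]s host.minikube.internal\n           fallthrough\n        }' | `+
			`sudo %[1]s --kubeconfig=%[2]s replace -f -`,
		kubectl, kubeconfig, hostIP)

	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}
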
	I0813 21:08:23.212733   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.801121223s)
	I0813 21:08:23.212785   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.212801   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213078   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213122   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213131   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213147   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.213162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213417   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213454   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213463   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213476   10867 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:23.973313   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.127694   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.238655669s)
	I0813 21:08:24.127768   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.127783   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128088   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128134   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:24.128152   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.128162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128402   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128416   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:21.783186   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:21.783216   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783222   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783226   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783231   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783238   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:21.783242   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783260   10272 retry.go:31] will retry after 3.261655801s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:25.051995   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:25.052028   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052037   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052051   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:25.052058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052076   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:25.052086   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052104   10272 retry.go:31] will retry after 4.086092664s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:22.421981   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.915565   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.903699   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:25.903987   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.130282   10867 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:08:24.130308   10867 addons.go:344] enableAddons completed in 4.377209962s
	I0813 21:08:26.342246   10867 pod_ready.go:92] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:26.342272   10867 pod_ready.go:81] duration metric: took 6.532595189s waiting for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:26.342282   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:28.367486   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.149965   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:29.149997   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150006   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150013   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:29.150019   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150025   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150035   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:29.150043   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150063   10272 retry.go:31] will retry after 6.402197611s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:26.928284   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.416662   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.403505   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.906239   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.367630   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.386002   10867 pod_ready.go:97] error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386040   10867 pod_ready.go:81] duration metric: took 5.043748322s waiting for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	E0813 21:08:31.386053   10867 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386063   10867 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395413   10867 pod_ready.go:92] pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.395442   10867 pod_ready.go:81] duration metric: took 9.37037ms waiting for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395456   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407839   10867 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.407860   10867 pod_ready.go:81] duration metric: took 12.39509ms waiting for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407872   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413811   10867 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.413832   10867 pod_ready.go:81] duration metric: took 5.950273ms waiting for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413845   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422794   10867 pod_ready.go:92] pod "kube-proxy-szvqm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.422819   10867 pod_ready.go:81] duration metric: took 8.966458ms waiting for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422831   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564060   10867 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.564136   10867 pod_ready.go:81] duration metric: took 141.29321ms waiting for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564168   10867 pod_ready.go:38] duration metric: took 11.763707327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
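
The pod_ready.go waits summarized above boil down to polling each pod until its Ready condition reports True. A client-go sketch of that check follows; it assumes KUBECONFIG points at the cluster, and the pod and namespace names are placeholders, not minikube's actual wait code.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// "etcd-minikube" is a placeholder pod name for illustration.
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "etcd-minikube", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet, polling again")
		time.Sleep(2 * time.Second)
	}
}
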
	I0813 21:08:31.564208   10867 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:31.564290   10867 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:31.578890   10867 api_server.go:70] duration metric: took 11.8258395s to wait for apiserver process to appear ...
	I0813 21:08:31.578919   10867 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:31.578932   10867 api_server.go:239] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0813 21:08:31.585647   10867 api_server.go:265] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0813 21:08:31.586833   10867 api_server.go:139] control plane version: v1.21.3
	I0813 21:08:31.586868   10867 api_server.go:129] duration metric: took 7.925906ms to wait for apiserver health ...
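
The healthz probe logged above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy when it returns 200 with the body "ok". A sketch of that probe follows; TLS verification is skipped here purely for illustration, whereas minikube trusts the cluster's CA certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip cert checks; the real check uses the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.156:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
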
	I0813 21:08:31.586879   10867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:31.766375   10867 system_pods.go:59] 8 kube-system pods found
	I0813 21:08:31.766406   10867 system_pods.go:61] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:31.766412   10867 system_pods.go:61] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:31.766416   10867 system_pods.go:61] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:31.766421   10867 system_pods.go:61] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:31.766424   10867 system_pods.go:61] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:31.766428   10867 system_pods.go:61] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:31.766436   10867 system_pods.go:61] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:31.766440   10867 system_pods.go:61] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:31.766447   10867 system_pods.go:74] duration metric: took 179.562479ms to wait for pod list to return data ...
	I0813 21:08:31.766456   10867 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:31.964873   10867 default_sa.go:45] found service account: "default"
	I0813 21:08:31.964899   10867 default_sa.go:55] duration metric: took 198.43488ms for default service account to be created ...
	I0813 21:08:31.964911   10867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:32.168305   10867 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:32.168349   10867 system_pods.go:89] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:32.168359   10867 system_pods.go:89] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:32.168369   10867 system_pods.go:89] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:32.168377   10867 system_pods.go:89] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:32.168384   10867 system_pods.go:89] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:32.168390   10867 system_pods.go:89] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:32.168402   10867 system_pods.go:89] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:32.168412   10867 system_pods.go:89] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:32.168423   10867 system_pods.go:126] duration metric: took 203.506299ms to wait for k8s-apps to be running ...
	I0813 21:08:32.168436   10867 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:32.168487   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:32.183556   10867 system_svc.go:56] duration metric: took 15.110742ms WaitForService to wait for kubelet.
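
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH: with --quiet, the unit's state is conveyed entirely by the exit code (0 means active). A local sketch of the same idea, without the SSH hop and using the plain unit name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; success or failure is carried by the exit status.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
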
	I0813 21:08:32.183585   10867 kubeadm.go:547] duration metric: took 12.430541017s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:32.183611   10867 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:32.366938   10867 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:32.366970   10867 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:32.366989   10867 node_conditions.go:105] duration metric: took 183.372537ms to run NodePressure ...
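
The node_conditions.go lines above read each node's capacity (ephemeral storage, CPU) and verify that no pressure condition is True. A client-go sketch of that verification follows, reusing the same kubeconfig-based client as the pod sketch earlier; it is an illustration, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
}
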
	I0813 21:08:32.367004   10867 start.go:231] waiting for startup goroutines ...
	I0813 21:08:32.428402   10867 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:08:32.430754   10867 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813205917-30853" cluster and "default" namespace by default
	I0813 21:08:31.925048   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.421689   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:33.402937   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.404185   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.559235   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:35.559264   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559272   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559278   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559284   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559289   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559299   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:35.559305   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559325   10272 retry.go:31] will retry after 6.062999549s: missing components: kube-controller-manager
	I0813 21:08:36.917628   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:39.412918   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:37.902004   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.400508   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:41.627792   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:41.627828   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627837   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627844   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627851   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:41.627857   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627863   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627874   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:41.627882   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627906   10272 retry.go:31] will retry after 10.504197539s: missing components: kube-controller-manager
	I0813 21:08:41.415467   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.418679   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.419622   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:42.401588   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:44.413733   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:46.903773   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.914837   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.413949   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.140470   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:52.140498   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140503   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140508   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140512   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140516   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140520   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140526   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:52.140531   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140549   10272 system_pods.go:126] duration metric: took 43.930520866s to wait for k8s-apps to be running ...
	I0813 21:08:52.140578   10272 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:52.140627   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:52.153255   10272 system_svc.go:56] duration metric: took 12.668182ms WaitForService to wait for kubelet.
	I0813 21:08:52.153279   10272 kubeadm.go:547] duration metric: took 1m21.071431976s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:52.153300   10272 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:52.156915   10272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:52.156939   10272 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:52.156953   10272 node_conditions.go:105] duration metric: took 3.648615ms to run NodePressure ...
	I0813 21:08:52.156962   10272 start.go:231] waiting for startup goroutines ...
	I0813 21:08:52.202043   10272 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 21:08:52.204217   10272 out.go:177] 
	W0813 21:08:52.204388   10272 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 21:08:52.206057   10272 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:08:52.207407   10272 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813205823-30853" cluster and "default" namespace by default
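
The "(minor skew: 6)" warning above comes from comparing kubectl's minor version with the cluster's: kubectl officially supports clusters within one minor version, so 1.20.x against a 1.14.0 cluster is flagged. A small sketch of that comparison; the parsing is simplified and assumes well-formed "major.minor.patch" strings.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor
// version numbers of two "major.minor.patch" strings.
func minorSkew(client, server string) int {
	minor := func(v string) int {
		parts := strings.Split(v, ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	skew := minorSkew("1.20.5", "1.14.0")
	fmt.Printf("minor skew: %d\n", skew)
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}
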
	I0813 21:08:48.904448   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.401687   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.414001   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.916108   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:53.903280   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.402202   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:01:48 UTC, end at Fri 2021-08-13 21:08:58 UTC. --
	Aug 13 21:08:57 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:57.208028719Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,StartedAt:1628888905709196729,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f70d6e8f-2aca-49ac-913a-73ddf71ae6ee/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f70d6e8f-2aca-49ac-913a-73ddf71ae6ee/containers/storage-provisioner/f735120b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/f70d6e8f-2aca-49ac-913a-73ddf71ae6ee/volumes/kubernetes.io~projected/kube-api-access-25ptq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_f70d6e8f-2aca-49ac-913a-73ddf71ae6ee/storag
e-provisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=64daeed0-58af-42a5-a4d6-085041f47f6f name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.027792469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=897eef1e-76aa-4aa2-9591-597b86820701 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.028035131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=897eef1e-76aa-4aa2-9591-597b86820701 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.028352738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kub
ernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4
984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CON
TAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888
876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:16288888758234599
07,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=897eef1e-76aa-4aa2-9591-597b86820701 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.078413162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dbfd3776-4a43-4d04-a7b6-05955547bf08 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.078568119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dbfd3776-4a43-4d04-a7b6-05955547bf08 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.078972175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kub
ernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4
984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CON
TAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888
876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:16288888758234599
07,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dbfd3776-4a43-4d04-a7b6-05955547bf08 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.123487997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a411b01-9296-48bf-a99c-bbec11f26cc6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.123556548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a411b01-9296-48bf-a99c-bbec11f26cc6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.123809541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kub
ernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4
984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CON
TAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888
876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:16288888758234599
07,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a411b01-9296-48bf-a99c-bbec11f26cc6 name=/runtime.v1alpha2.RuntimeService/ListContainers
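
The request/response pairs above are CRI gRPC calls (runtime.v1alpha2.RuntimeService/ListContainers) against CRI-O's socket; an empty filter returns the full container list, which is exactly what the "No filters were applied" debug lines say. A hedged Go sketch of the same call, assuming the k8s.io/cri-api v1alpha2 module and the default CRI-O socket path:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Default CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns every container, matching the
	// "No filters were applied, returning full container list" lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex chars; print a short prefix for readability.
		fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
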
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.163361714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cce216e0-aaa2-4535-8ab3-58f516b65839 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.163461241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cce216e0-aaa2-4535-8ab3-58f516b65839 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:08:58 embed-certs-20210813205917-30853 crio[2037]: time="2021-08-13 21:08:58.163724901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271,PodSandboxId:b88a0e8366b2082bba9149c65441bce987946140839685e1e31fb3e7e8dfc4b8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888915293919082,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-bvcl6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 87e4c5d0-a1f9-4f5a-9c80-aba83055f746,},Annotations:map[string]string{io.kubernetes.container.hash: 5661eda4,io.kub
ernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f,PodSandboxId:33266a9854848201da6d3746eb07c84df14f0a592aef740a370d05c7a6ae184b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888906357994513,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-77xxt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6e067ab8-6535-4
984-8dcf-037619871a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9e6c25e5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce,PodSandboxId:f6a238f5e8f905decf70ba6d0798c0b55f00e62eedd0b9a6ade76ca5950a7b48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888905617777808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d6e8f-2aca-49ac-913a-73ddf71ae6ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5739bdfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca,PodSandboxId:91973f54aaaf504ee899dc1a81b7c613fbd42f31ef74508e24a08b2418bd53e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628888901983740106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-8bmrm,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 23a5740e-bd96-4bd0-851e-4abc81b7ddff,},Annotations:map[string]string{io.kubernetes.container.hash: c22ee817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945,PodSandboxId:bbd93e1b95832956025a082d4160af5cd395e606a1a5e8465d1fccdc5be2b46b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CON
TAINER_RUNNING,CreatedAt:1628888899956451546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szvqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116fa9a-0229-40cf-ae60-5d89fb7716f1,},Annotations:map[string]string{io.kubernetes.container.hash: dc4efc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982,PodSandboxId:1cbb2a7cfe7c8a46231fa3393c01b2c0266e93d5b7f315d062fdf8c207edcf7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888
876728312962,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86af42d52587aa295e0638fccb1e3b1a,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a,PodSandboxId:e75ff20160ef72fd348e1fe876fe301c42c9cd75a24b5580e3fdce2a18b756c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888876321117668,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23543a71fe921e22bb392434067d227c,},Annotations:map[string]string{io.kubernetes.container.hash: a18fff1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50,PodSandboxId:3f133993f51d8960134179cacdd5de57d2ad7c7667476c1726bcc9ac836660a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888876042084557,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe0184ae8cd21e5b44019a5cd9c7ffe6,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b,PodSandboxId:c650c93f5b421c954f8db6ffcbea1ab3b01bf3971fa0df279493ba5a4d08b1d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:16288888758234599
07,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-20210813205917-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3259cbcf4a901b9f2a43a4fa280a70ee,},Annotations:map[string]string{io.kubernetes.container.hash: a548740d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cce216e0-aaa2-4535-8ab3-58f516b65839 name=/runtime.v1alpha2.RuntimeService/ListContainers
	[four further ListContainers exchanges at 21:08:58.205, 21:08:58.248, 21:08:58.292 and 21:08:58.334 (ids 10cb4cdb-b9e0-4510-b674-826185abb069, 163007de-94b0-4cc2-a7e9-380a1b3e4b11, 0e54a3b1-6311-4e67-aac9-ee4a91761114, 59b39e72-e651-45d5-9077-1a84d6b12309) elided; their response payloads are byte-identical to the one above]
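The exchanges above are a client (ordinarily the kubelet's sync loop) polling CRI-O over its gRPC runtime socket; each unfiltered ListContainers call returns the full container list. The same query can be issued by hand from inside the VM (a sketch, assuming crictl is installed and CRI-O is on its default socket path):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a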
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID
	1233d640b6fe4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   23 seconds ago       Exited              dashboard-metrics-scraper   1                   b88a0e8366b20
	91ce32446dbb6       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   32 seconds ago       Running             kubernetes-dashboard        0                   33266a9854848
	1ba3b441b51a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   32 seconds ago       Running             storage-provisioner         0                   f6a238f5e8f90
	2a1b9a1d4a2b6       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   36 seconds ago       Running             coredns                     0                   91973f54aaaf5
	cba4967a2c2ca       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   38 seconds ago       Running             kube-proxy                  0                   bbd93e1b95832
	d1827f5ba3f77       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler              0                   1cbb2a7cfe7c8
	8a1f73a982b2d       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                        0                   e75ff20160ef7
	9387b11356ea0       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager     0                   3f133993f51d8
	43af874e547f6       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver              0                   c650c93f5b421
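	The CONTAINER and POD ID columns hold 13-character prefixes of the full 64-character IDs from the ListContainers responses above; crictl accepts any unambiguous prefix, so the exited dashboard-metrics-scraper container can be examined directly (again assuming crictl is available in the VM):

	sudo crictl inspect 1233d640b6fe4
	sudo crictl logs 1233d640b6fe4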
	
	* 
	* ==> coredns [2a1b9a1d4a2b67bbbeefe5b6df20742f76c81a3bf37133e403fc6b8a167092ca] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
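	The same startup banner is retrievable through the API server, using the pod name that appears in the container listings above (a sketch; assumes the kubectl context carries the profile name, as elsewhere in this report):

	kubectl --context embed-certs-20210813205917-30853 -n kube-system logs coredns-558bd4d5db-8bmrm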
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +4.559950] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.040924] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.084946] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1727 comm=systemd-network
	[  +0.826895] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +2.247774] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005907] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:02] systemd-fstab-generator[2133]: Ignoring "noauto" for root device
	[  +0.183630] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.260871] systemd-fstab-generator[2172]: Ignoring "noauto" for root device
	[  +6.767201] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
	[ +17.663697] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.643891] kauditd_printk_skb: 107 callbacks suppressed
	[Aug13 21:03] kauditd_printk_skb: 2 callbacks suppressed
	[ +37.908192] NFSD: Unable to end grace period: -110
	[Aug13 21:07] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.326104] kauditd_printk_skb: 14 callbacks suppressed
	[ +14.799084] systemd-fstab-generator[6047]: Ignoring "noauto" for root device
	[Aug13 21:08] systemd-fstab-generator[6428]: Ignoring "noauto" for root device
	[ +15.462471] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.808060] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.969890] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.090698] systemd-fstab-generator[7892]: Ignoring "noauto" for root device
	[  +0.825336] systemd-fstab-generator[7946]: Ignoring "noauto" for root device
	[  +1.031372] systemd-fstab-generator[8000]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [8a1f73a982b2d8a6a01b2ce2f1ddf5dc9ca0c8bf47bc2dbc93a31761b458395a] <==
	* 2021-08-13 21:07:56.806650 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 21:07:56.817028 I | etcdserver: 45ea9d8f303c08fa as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa switched to configuration voters=(5038012371482446074)
	2021-08-13 21:07:56.822242 I | etcdserver/membership: added member 45ea9d8f303c08fa [https://192.168.39.156:2380] to cluster d1f5bcbb1e4f2572
	2021-08-13 21:07:56.825435 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 21:07:56.825666 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 21:07:56.825808 I | embed: listening for peers on 192.168.39.156:2380
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa is starting a new election at term 1
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa became candidate at term 2
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa received MsgVoteResp from 45ea9d8f303c08fa at term 2
	raft2021/08/13 21:07:56 INFO: 45ea9d8f303c08fa became leader at term 2
	raft2021/08/13 21:07:56 INFO: raft.node: 45ea9d8f303c08fa elected leader 45ea9d8f303c08fa at term 2
	2021-08-13 21:07:56.877936 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 21:07:56.883311 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 21:07:56.883457 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 21:07:56.883490 I | etcdserver: published {Name:embed-certs-20210813205917-30853 ClientURLs:[https://192.168.39.156:2379]} to cluster d1f5bcbb1e4f2572
	2021-08-13 21:07:56.885922 I | embed: ready to serve client requests
	2021-08-13 21:07:56.889110 I | embed: ready to serve client requests
	2021-08-13 21:07:56.890628 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:07:56.906080 I | embed: serving client requests on 192.168.39.156:2379
	2021-08-13 21:08:21.444097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:08:23.770534 W | etcdserver: read-only range request "key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-certs\" " with result "range_response_count:0 size:5" took too long (116.65398ms) to execute
	2021-08-13 21:08:24.693511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:08:29.027443 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zdlnb\" " with result "range_response_count:1 size:4480" took too long (164.618661ms) to execute
	2021-08-13 21:08:34.692683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
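	The "read-only range request ... took too long" warnings (>100ms) can be correlated with backend latency. One hedged way to check, reusing the certificate paths etcd itself logs above and exec'ing into the static pod (assumes the etcd image ships etcdctl, and that the server cert carries client-auth usage, as kubeadm-issued etcd certs typically do):

	kubectl --context embed-certs-20210813205917-30853 -n kube-system exec etcd-embed-certs-20210813205917-30853 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status -w table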
	
	* 
	* ==> kernel <==
	*  21:09:08 up 7 min,  0 users,  load average: 1.44, 0.79, 0.39
	Linux embed-certs-20210813205917-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [43af874e547f66d91fcf2c0a064742224f715da7364542f5c4981b49c5822a9b] <==
	* I0813 21:08:01.721328       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 21:08:01.756931       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 21:08:02.511279       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:08:02.511302       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:08:02.529516       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:08:02.533470       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:08:02.533580       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:08:03.377568       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:08:03.429283       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:08:03.542496       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.156]
	I0813 21:08:03.543803       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:08:03.562423       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:08:04.190221       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:08:05.372658       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:08:05.451606       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:08:10.992596       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:08:18.341571       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 21:08:18.791922       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 21:08:25.206710       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 21:08:25.206984       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:08:25.206999       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:08:31.825652       1 client.go:360] parsed scheme: "passthrough"
	I0813 21:08:31.825702       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 21:08:31.825721       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
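The repeated OpenAPI 503 for v1beta1.metrics.k8s.io above is expected in this run: the aggregated metrics API is registered, but its backing metrics-server pod never starts because the addon was pointed at the unreachable registry fake.domain (see the kubelet section below and the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the Audit table). Against this profile's kubeconfig context, the unavailable APIService can be confirmed with standard kubectl (a diagnostic sketch, not part of the test run):

    # AVAILABLE should read False while the backing service is unreachable.
    kubectl get apiservice v1beta1.metrics.k8s.io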
	
	* 
	* ==> kube-controller-manager [9387b11356ea0fae09161b9d66c6638a0f1a52fab558950802f168e1d7e78d50] <==
	* I0813 21:08:22.742977       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 21:08:22.840917       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 21:08:22.915612       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-qc7sb"
	I0813 21:08:23.384701       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:08:23.448806       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:08:23.485617       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:08:23.498760       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.499550       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.520670       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:08:23.537147       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.541926       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.549645       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.551660       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.591345       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.591709       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.594311       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.594769       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.615291       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.615656       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.628215       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.628577       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:08:23.644116       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:08:23.644266       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:08:23.672418       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-bvcl6"
	I0813 21:08:23.822383       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-77xxt"
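The FailedCreate/SuccessfulCreate churn above is ordering noise rather than a failure: the dashboard addon applies the Deployments before their ServiceAccount exists, so the ReplicaSet controller retries with backoff until the kubernetes-dashboard ServiceAccount is created, after which both pods are created at 21:08:23. If needed, the account's presence can be checked directly (standard kubectl, hypothetical follow-up):

    kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard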
	
	* 
	* ==> kube-proxy [cba4967a2c2ca57d2628939db9e88f4476a8e1ff61c410a243da3593b4795945] <==
	* I0813 21:08:20.240227       1 node.go:172] Successfully retrieved node IP: 192.168.39.156
	I0813 21:08:20.240302       1 server_others.go:140] Detected node IP 192.168.39.156
	W0813 21:08:20.240359       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:08:20.337066       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:08:20.337097       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:08:20.337319       1 server_others.go:212] Using iptables Proxier.
	I0813 21:08:20.339187       1 server.go:643] Version: v1.21.3
	I0813 21:08:20.343565       1 config.go:315] Starting service config controller
	I0813 21:08:20.343594       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:08:20.343635       1 config.go:224] Starting endpoint slice config controller
	I0813 21:08:20.343640       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 21:08:20.348629       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 21:08:20.356382       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:08:20.444457       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:08:20.444544       1 shared_informer.go:247] Caches are synced for service config 
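kube-proxy's "No iptables support for IPv6: exit status 3" above means its IPv6 probe failed inside the minikube VM, so it fell back to single-stack IPv4 iptables mode; this is benign here. The probe can be re-run by hand over the profile's ssh command (sketch only, not executed by the test):

    out/minikube-linux-amd64 -p embed-certs-20210813205917-30853 ssh "sudo ip6tables -L -n"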
	
	* 
	* ==> kube-scheduler [d1827f5ba3f77e78ef0fb97cdb9ee17ae177af486a3c0f424e20e249cecc1982] <==
	* E0813 21:08:01.748957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:08:01.752246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.752432       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.752673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:08:01.752810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:08:01.753010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:08:01.753152       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:08:01.753218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.753483       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:01.753591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:08:01.753659       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:08:01.758943       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:08:01.762207       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:08:02.640197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:08:02.660929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:02.708658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:08:02.724689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:08:02.759438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:08:02.777077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:08:02.906525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:02.956482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:08:02.963077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:08:02.985745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:08:03.253517       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 21:08:06.028996       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
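The burst of "forbidden" reflector errors above is startup ordering: the scheduler begins listing resources before the apiserver has finished installing the default RBAC bindings for system:kube-scheduler, and the errors stop once caches sync at 21:08:06. After bootstrap, the granted permissions can be spot-checked (standard kubectl, hypothetical follow-up):

    kubectl auth can-i list nodes --as=system:kube-scheduler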
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:01:48 UTC, end at Fri 2021-08-13 21:09:08 UTC. --
	Aug 13 21:08:23 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:23.947151    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjft\" (UniqueName: \"kubernetes.io/projected/87e4c5d0-a1f9-4f5a-9c80-aba83055f746-kube-api-access-bvjft\") pod \"dashboard-metrics-scraper-8685c45546-bvcl6\" (UID: \"87e4c5d0-a1f9-4f5a-9c80-aba83055f746\") "
	Aug 13 21:08:23 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:23.947354    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/87e4c5d0-a1f9-4f5a-9c80-aba83055f746-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-bvcl6\" (UID: \"87e4c5d0-a1f9-4f5a-9c80-aba83055f746\") "
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:24.048395    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9w5v\" (UniqueName: \"kubernetes.io/projected/6e067ab8-6535-4984-8dcf-037619871a7e-kube-api-access-j9w5v\") pod \"kubernetes-dashboard-6fcdf4f6d-77xxt\" (UID: \"6e067ab8-6535-4984-8dcf-037619871a7e\") "
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:24.048733    6437 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6e067ab8-6535-4984-8dcf-037619871a7e-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-77xxt\" (UID: \"6e067ab8-6535-4984-8dcf-037619871a7e\") "
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.563317    6437 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.565194    6437 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.565448    6437 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8pp2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qc7sb_kube-system(43aa1ab2-5284-4d76-b826-12fd50a0ba54): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:24 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:24.566602    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qc7sb" podUID=43aa1ab2-5284-4d76-b826-12fd50a0ba54
	Aug 13 21:08:25 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:25.487449    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-qc7sb" podUID=43aa1ab2-5284-4d76-b826-12fd50a0ba54
	Aug 13 21:08:32 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:32.346799    6437 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/87e4c5d0-a1f9-4f5a-9c80-aba83055f746/etc-hosts with error exit status 1" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-bvcl6"
	Aug 13 21:08:32 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:32.370480    6437 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/43aa1ab2-5284-4d76-b826-12fd50a0ba54/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-qc7sb"
	Aug 13 21:08:35 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:35.082031    6437 scope.go:111] "RemoveContainer" containerID="ce36fb53702404bc5cdd5e36a22d39291db911cc24e730e55672b9450b4bc9e0"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:36.090037    6437 scope.go:111] "RemoveContainer" containerID="ce36fb53702404bc5cdd5e36a22d39291db911cc24e730e55672b9450b4bc9e0"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:36.090450    6437 scope.go:111] "RemoveContainer" containerID="1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.090682    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-bvcl6_kubernetes-dashboard(87e4c5d0-a1f9-4f5a-9c80-aba83055f746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-bvcl6" podUID=87e4c5d0-a1f9-4f5a-9c80-aba83055f746
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209029    6437 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209066    6437 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209169    6437 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8pp2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qc7sb_kube-system(43aa1ab2-5284-4d76-b826-12fd50a0ba54): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:36 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:36.209206    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qc7sb" podUID=43aa1ab2-5284-4d76-b826-12fd50a0ba54
	Aug 13 21:08:37 embed-certs-20210813205917-30853 kubelet[6437]: I0813 21:08:37.104359    6437 scope.go:111] "RemoveContainer" containerID="1233d640b6fe419940fb33cbadeaf09f21a289c51c982b8c6ec07fd1dc929271"
	Aug 13 21:08:37 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:37.104734    6437 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-bvcl6_kubernetes-dashboard(87e4c5d0-a1f9-4f5a-9c80-aba83055f746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-bvcl6" podUID=87e4c5d0-a1f9-4f5a-9c80-aba83055f746
	Aug 13 21:08:42 embed-certs-20210813205917-30853 kubelet[6437]: E0813 21:08:42.622230    6437 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/43aa1ab2-5284-4d76-b826-12fd50a0ba54/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-qc7sb"
	Aug 13 21:08:43 embed-certs-20210813205917-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:08:43 embed-certs-20210813205917-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:08:43 embed-certs-20210813205917-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
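Two things stand out in the kubelet section above. The fake.domain ErrImagePull loop is intentional test setup (the metrics-server image was redirected to an unreachable registry), and the clean kubelet stop at 21:08:43 is consistent with the pause under test beginning: minikube's pause path disables the kubelet before freezing containers, as the old-k8s-version trace further down shows explicitly with

    sudo systemctl disable --now kubelet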
	
	* 
	* ==> kubernetes-dashboard [91ce32446dbb67c2233805427206f408b30c7576648d06079b600050c570399f] <==
	* 2021/08/13 21:08:26 Using namespace: kubernetes-dashboard
	2021/08/13 21:08:26 Using in-cluster config to connect to apiserver
	2021/08/13 21:08:26 Using secret token for csrf signing
	2021/08/13 21:08:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:08:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:08:26 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:08:26 Generating JWE encryption key
	2021/08/13 21:08:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:08:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:08:27 Initializing JWE encryption key from synchronized object
	2021/08/13 21:08:27 Creating in-cluster Sidecar client
	2021/08/13 21:08:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:08:27 Serving insecurely on HTTP port: 9090
	2021/08/13 21:08:26 Starting overwatch
	
	* 
	* ==> storage-provisioner [1ba3b441b51a5337d4c625f419ac7e6992602fe15a5d1f856e3b665f560500ce] <==
	* I0813 21:08:25.762110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:08:25.811183       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:08:25.817968       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:08:25.857352       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:08:25.858970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813205917-30853_5a7e4761-9efd-4312-9155-268d1305c244!
	I0813 21:08:25.860948       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e524bc5-c9ee-4bad-a746-e755f69879e4", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813205917-30853_5a7e4761-9efd-4312-9155-268d1305c244 became leader
	I0813 21:08:25.975265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813205917-30853_5a7e4761-9efd-4312-9155-268d1305c244!
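The storage-provisioner lines above show a normal client-go leader election over the kube-system/k8s.io-minikube-hostpath Endpoints object; the current holder is recorded in that object's leader annotation and can be inspected directly (diagnostic sketch, not part of the test run):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml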
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:09:08.556505   12289 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (26.04s)
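The "TLS handshake timeout" from kubectl describe nodes in the stderr above is consistent with the pause under test having already frozen the apiserver, which is why log collection itself failed with exit status 110. Unpausing the profile should let the same post-mortem call succeed (recovery sketch, not part of the test):

    out/minikube-linux-amd64 unpause -p embed-certs-20210813205917-30853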

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (7.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210813205823-30853 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210813205823-30853 --alsologtostderr -v=1: exit status 80 (2.575750416s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20210813205823-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:09:02.927346   12353 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:09:02.927436   12353 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:02.927443   12353 out.go:311] Setting ErrFile to fd 2...
	I0813 21:09:02.927446   12353 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:02.927558   12353 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:09:02.927711   12353 out.go:305] Setting JSON to false
	I0813 21:09:02.927738   12353 mustload.go:65] Loading cluster: old-k8s-version-20210813205823-30853
	I0813 21:09:02.928076   12353 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 21:09:02.928481   12353 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:02.928524   12353 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:02.939693   12353 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0813 21:09:02.940182   12353 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:02.940798   12353 main.go:130] libmachine: Using API Version  1
	I0813 21:09:02.940823   12353 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:02.941191   12353 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:02.941362   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:09:02.944496   12353 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:09:02.944822   12353 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:02.944862   12353 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:02.955582   12353 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0813 21:09:02.955956   12353 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:02.956447   12353 main.go:130] libmachine: Using API Version  1
	I0813 21:09:02.956475   12353 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:02.956801   12353 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:02.956952   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:09:02.957559   12353 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210813205823-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:09:02.960035   12353 out.go:177] * Pausing node old-k8s-version-20210813205823-30853 ... 
	I0813 21:09:02.960061   12353 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:09:02.960459   12353 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:02.960502   12353 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:02.971712   12353 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0813 21:09:02.972122   12353 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:02.972514   12353 main.go:130] libmachine: Using API Version  1
	I0813 21:09:02.972535   12353 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:02.972890   12353 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:02.973084   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:09:02.973305   12353 ssh_runner.go:149] Run: systemctl --version
	I0813 21:09:02.973331   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:09:02.978590   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:09:02.978982   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:09:02.979011   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:09:02.979093   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:09:02.979259   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:09:02.979388   12353 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:09:02.979484   12353 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:09:03.081525   12353 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:09:03.092731   12353 pause.go:50] kubelet running: true
	I0813 21:09:03.092796   12353 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:09:03.382014   12353 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:09:03.382106   12353 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:09:03.523049   12353 cri.go:76] found id: "c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4"
	I0813 21:09:03.523077   12353 cri.go:76] found id: "0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6"
	I0813 21:09:03.523084   12353 cri.go:76] found id: "0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49"
	I0813 21:09:03.523090   12353 cri.go:76] found id: "8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db"
	I0813 21:09:03.523095   12353 cri.go:76] found id: "974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e"
	I0813 21:09:03.523101   12353 cri.go:76] found id: "4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770"
	I0813 21:09:03.523107   12353 cri.go:76] found id: "8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8"
	I0813 21:09:03.523113   12353 cri.go:76] found id: "02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3"
	I0813 21:09:03.523119   12353 cri.go:76] found id: "fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90"
	I0813 21:09:03.523130   12353 cri.go:76] found id: "5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6"
	I0813 21:09:03.523134   12353 cri.go:76] found id: ""
	I0813 21:09:03.523173   12353 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
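The trace above cuts off immediately after enumerating the running CRI containers and invoking `sudo runc list -f json`, so the exit status 80 (a code minikube appears to reserve for guest-side errors) is raised somewhere in the container-freezing step rather than while stopping the kubelet, which completed at 21:09:03. The last command can be replayed by hand to see whether runc itself errors (hypothetical reproduction step):

    out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo runc list -f json"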
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210813205823-30853 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853: exit status 2 (257.956142ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 logs -n 25: (1.278392865s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:03 UTC | Fri, 13 Aug 2021 20:59:03 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	| delete  | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 20:59:17 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:23 UTC | Fri, 13 Aug 2021 21:00:44 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:56 UTC | Fri, 13 Aug 2021 21:00:57 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:57 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813204600-30853         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:01 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | kubernetes-upgrade-20210813204600-30853           |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813210102-30853      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | disable-driver-mounts-20210813210102-30853        |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:17 UTC | Fri, 13 Aug 2021 21:01:05 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:18 UTC | Fri, 13 Aug 2021 21:01:19 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:19 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 21:02:15 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:27 UTC | Fri, 13 Aug 2021 21:02:28 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:03:15 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio           |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
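The audit table above records every minikube invocation in this run with its start and end times. For orientation, a minimal Go sketch (not the test harness itself) of driving the same stop / enable-dashboard / re-start cycle the table shows for the no-preload profile; the binary path and flags are copied from the table, everything else is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the locally built minikube binary, as the tests do.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		profile := "no-preload-20210813205915-30853"
		// stop, re-enable the dashboard addon, then start again with the
		// same flags the table records for this profile.
		steps := [][]string{
			{"stop", "-p", profile, "--alsologtostderr", "-v=3"},
			{"addons", "enable", "dashboard", "-p", profile,
				"--images=MetricsScraper=k8s.gcr.io/echoserver:1.4"},
			{"start", "-p", profile, "--memory=2200", "--alsologtostderr",
				"--wait=true", "--preload=false", "--driver=kvm2",
				"--container-runtime=crio", "--kubernetes-version=v1.22.0-rc.0"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}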
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:03:32
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:03:32.257678   11600 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:03:32.257760   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257764   11600 out.go:311] Setting ErrFile to fd 2...
	I0813 21:03:32.257767   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257889   11600 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:03:32.258149   11600 out.go:305] Setting JSON to false
	I0813 21:03:32.297164   11600 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9974,"bootTime":1628878638,"procs":184,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:03:32.297442   11600 start.go:121] virtualization: kvm guest
	I0813 21:03:32.300208   11600 out.go:177] * [no-preload-20210813205915-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:03:32.301763   11600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:03:32.300370   11600 notify.go:169] Checking for updates...
	I0813 21:03:32.303324   11600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:03:32.304875   11600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:03:32.306390   11600 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:03:32.306988   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:03:32.307576   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.307638   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.319235   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0813 21:03:32.319644   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.320320   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.320347   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.320748   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.320979   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.321189   11600 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:03:32.321646   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.321692   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.332966   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0813 21:03:32.333332   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.333819   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.333847   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.334199   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.334372   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.365034   11600 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:03:32.365061   11600 start.go:278] selected driver: kvm2
	I0813 21:03:32.365067   11600 start.go:751] validating driver "kvm2" against &{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.365197   11600 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:03:32.367047   11600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.367426   11600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:03:32.378154   11600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:03:32.378447   11600 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:03:32.378474   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:03:32.378482   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:03:32.378489   11600 start_flags.go:277] config:
	{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.378585   11600 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
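The repeated `acquiring lock: {Name:... Delay:500ms Timeout:10m0s ...}` lines above are named locks taken with a retry delay and a deadline. A self-contained sketch of that acquire-with-timeout pattern, assuming a simple in-process lock table (the real locks coordinate across processes):

	package main

	import (
		"errors"
		"fmt"
		"sync"
		"time"
	)

	var (
		mu    sync.Mutex
		locks = map[string]bool{}
	)

	// tryAcquire takes the named lock if it is free.
	func tryAcquire(name string) bool {
		mu.Lock()
		defer mu.Unlock()
		if locks[name] {
			return false
		}
		locks[name] = true
		return true
	}

	// acquire retries every `delay` until `timeout` elapses, matching the
	// Delay/Timeout fields printed in the log.
	func acquire(name string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !tryAcquire(name) {
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring " + name)
			}
			time.Sleep(delay)
		}
		return nil
	}

	func main() {
		if err := acquire("mk2b036d89cf37cc0152d0a0c02b02b678e47b0f", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
		}
	}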
	I0813 21:03:30.512688   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:33.010993   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:32.670472   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:35.171315   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:30.963285   11447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210813210102-30853" ...
	I0813 21:03:30.963310   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Start
	I0813 21:03:30.963467   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring networks are active...
	I0813 21:03:30.965431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network default is active
	I0813 21:03:30.965733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network mk-default-k8s-different-port-20210813210102-30853 is active
	I0813 21:03:30.966083   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Getting domain xml...
	I0813 21:03:30.968061   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Creating domain...
	I0813 21:03:31.416170   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting to get IP...
	I0813 21:03:31.417365   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418005   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has current primary IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418042   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Found IP for machine: 192.168.50.136
	I0813 21:03:31.418064   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserving static IP address...
	I0813 21:03:31.418520   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.418572   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | skip adding static IP to network mk-default-k8s-different-port-20210813210102-30853 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"}
	I0813 21:03:31.418592   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserved static IP address: 192.168.50.136
	I0813 21:03:31.418609   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting for SSH to be available...
	I0813 21:03:31.418628   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:31.424645   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.425182   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425389   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:31.425422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:31.425464   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:31.425482   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:31.425509   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:32.380458   11600 out.go:177] * Starting control plane node no-preload-20210813205915-30853 in cluster no-preload-20210813205915-30853
	I0813 21:03:32.380479   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:03:32.380628   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:03:32.380658   11600 cache.go:108] acquiring lock: {Name:mkb38baead8d508ff836651dee18a7788cf32c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380644   11600 cache.go:108] acquiring lock: {Name:mk46180cf67d5c541fa2597ef8e0122b51c3d66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380670   11600 cache.go:108] acquiring lock: {Name:mk7bb3b696fd3372110b0be599d95315e027c7ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380696   11600 cache.go:108] acquiring lock: {Name:mkf1d6f5d79a8fed4d2cc99505f5f3464b88e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380719   11600 cache.go:108] acquiring lock: {Name:mk828c96511ca39b5ec24da9b6afedd4727bdcf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380743   11600 cache.go:108] acquiring lock: {Name:mk03e6bcc333bfad143239419641099a94fed11e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380784   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 21:03:32.380790   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 21:03:32.380787   11600 cache.go:108] acquiring lock: {Name:mk928ab7caca14c2ebd27b364dc38d466ea61870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380747   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 21:03:32.380809   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 21:03:32.380803   11600 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 161.844µs
	I0813 21:03:32.380822   11600 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 21:03:32.380808   11600 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 149.17µs
	I0813 21:03:32.380819   11600 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 164.006µs
	I0813 21:03:32.380839   11600 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 21:03:32.380837   11600 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:03:32.380848   11600 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 21:03:32.380801   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 21:03:32.380838   11600 cache.go:108] acquiring lock: {Name:mk3d501986e0e48ddd0db3c6e93347910f1116e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380854   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 21:03:32.380853   11600 cache.go:108] acquiring lock: {Name:mkf7939d465d516c835d7d7703c105943f1ade9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380867   11600 start.go:313] acquiring machines lock for no-preload-20210813205915-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:03:32.380868   11600 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 155.968µs
	I0813 21:03:32.380881   11600 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 21:03:32.380876   11600 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 155.847µs
	I0813 21:03:32.380760   11600 cache.go:108] acquiring lock: {Name:mkec6e53ab9796f80ec65d6b99a6c3ee881fedd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380890   11600 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380896   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 21:03:32.380899   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 21:03:32.380841   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 21:03:32.380909   11600 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 73.516µs
	I0813 21:03:32.380913   11600 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 62.387µs
	I0813 21:03:32.380921   11600 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380939   11600 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380925   11600 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 136.425µs
	I0813 21:03:32.380966   11600 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380936   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 21:03:32.380982   11600 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 225.197µs
	I0813 21:03:32.380995   11600 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 21:03:32.380828   11600 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.9µs
	I0813 21:03:32.381004   11600 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 21:03:32.381012   11600 cache.go:88] Successfully saved all images to host disk.
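Each `cache image ... took ...µs` pair above is a per-image check: take the image's lock, stat the expected tar under .minikube/cache/images, and skip the export when it already exists (hence the microsecond timings). A minimal sketch of that exists-then-skip check, with the path layout inferred from the log and an illustrative image list:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"time"
	)

	// cachePath maps "k8s.gcr.io/pause:3.4.1" to
	// ".../cache/images/k8s.gcr.io/pause_3.4.1", as the log paths show.
	func cachePath(root, image string) string {
		return filepath.Join(root, "cache", "images", strings.ReplaceAll(image, ":", "_"))
	}

	func ensureCached(root, image string) error {
		start := time.Now()
		p := cachePath(root, image)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("cache image %q -> %q took %s (exists)\n", image, p, time.Since(start))
			return nil
		}
		// The real code would pull the image and write the tar here;
		// elided in this sketch.
		return fmt.Errorf("not cached: %s", image)
	}

	func main() {
		root := os.ExpandEnv("$HOME/.minikube")
		for _, img := range []string{"k8s.gcr.io/pause:3.4.1", "k8s.gcr.io/etcd:3.4.13-3"} {
			if err := ensureCached(root, img); err != nil {
				fmt.Println(err)
			}
		}
	}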
	I0813 21:03:35.012590   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.514197   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.669098   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.168374   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.013348   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.014535   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.670990   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:44.671751   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:43.440320   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:43.440353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:43.440363   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | command : exit 0
	I0813 21:03:43.440369   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | err     : exit status 255
	I0813 21:03:43.440381   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | output  : 
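The exchange above is the WaitForSSH probe: libmachine runs `exit 0` over ssh and treats a non-zero exit (here status 255, while the guest is still booting) as "not yet reachable", then retries until the command succeeds. A hedged sketch of that loop; the ssh options mirror the ones the log prints, while the retry interval and key path are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshAlive runs "exit 0" on the guest; a zero exit means SSH is up.
	func sshAlive(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		return cmd.Run() == nil
	}

	func waitForSSH(ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if sshAlive(ip, keyPath) {
				return nil // the log then prints: SSH cmd err, output: <nil>
			}
			time.Sleep(3 * time.Second) // retry interval is an assumption
		}
		return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
	}

	func main() {
		fmt.Println(waitForSSH("192.168.50.136", "/path/to/id_rsa", 2*time.Minute))
	}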
	I0813 21:03:47.896090   11600 start.go:317] acquired machines lock for "no-preload-20210813205915-30853" in 15.515202861s
	I0813 21:03:47.896143   11600 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:03:47.896154   11600 fix.go:55] fixHost starting: 
	I0813 21:03:47.896500   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:47.896553   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:47.909531   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0813 21:03:47.909942   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:47.910569   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:47.910588   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:47.910953   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:47.911154   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:47.911327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:03:47.913763   11600 fix.go:108] recreateIfNeeded on no-preload-20210813205915-30853: state=Stopped err=<nil>
	I0813 21:03:47.913791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	W0813 21:03:47.913946   11600 fix.go:134] unexpected machine state, will restart: <nil>
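fixHost above looks up the machine state and picks a path: a Stopped machine produces the "unexpected machine state, will restart" warning and a restart rather than a recreate. A toy sketch of that decision:

	package main

	import "fmt"

	type State int

	const (
		Running State = iota
		Stopped
		None
	)

	// fixHost mirrors the reuse / restart / recreate branch seen in the log.
	func fixHost(st State) string {
		switch st {
		case Running:
			return "reuse running machine"
		case Stopped:
			return "unexpected machine state, will restart"
		default:
			return "machine missing, will recreate"
		}
	}

	func main() { fmt.Println(fixHost(Stopped)) }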
	I0813 21:03:44.511774   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.514028   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:48.515447   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:47.170765   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:49.174655   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.440683   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:46.445948   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446304   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.446340   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446496   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:46.446533   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:46.446579   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:46.446601   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:46.446618   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:46.582984   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:03:46.583312   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetConfigRaw
	I0813 21:03:46.584076   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.589266   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589559   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.589588   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589810   11447 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/config.json ...
	I0813 21:03:46.590017   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590212   11447 machine.go:88] provisioning docker machine ...
	I0813 21:03:46.590232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590407   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590545   11447 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210813210102-30853"
	I0813 21:03:46.590576   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590701   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.595270   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595544   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.595577   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.595884   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596013   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596117   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.596285   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.596463   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.596478   11447 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813210102-30853 && echo "default-k8s-different-port-20210813210102-30853" | sudo tee /etc/hostname
	I0813 21:03:46.733223   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813210102-30853
	
	I0813 21:03:46.733252   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.739002   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739323   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.739359   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739481   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.739690   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739849   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739990   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.740161   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.740320   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.740349   11447 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813210102-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813210102-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813210102-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:03:46.872322   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
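The two SSH commands above set the guest hostname and then idempotently rewrite the 127.0.1.1 entry in /etc/hosts (replacing an existing entry, appending otherwise). A sketch of composing those commands in Go; printing stands in for the log's SSH runner:

	package main

	import "fmt"

	// hostnameCmds returns the two shell commands the provisioner runs,
	// built from the machine name.
	func hostnameCmds(name string) []string {
		set := fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
		hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
		return []string{set, hosts}
	}

	func main() {
		for _, c := range hostnameCmds("default-k8s-different-port-20210813210102-30853") {
			fmt.Println(c) // a real runner would execute these over SSH
		}
	}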
	I0813 21:03:46.872366   11447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:03:46.872403   11447 buildroot.go:174] setting up certificates
	I0813 21:03:46.872413   11447 provision.go:83] configureAuth start
	I0813 21:03:46.872433   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.872715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.878075   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878404   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.878459   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878540   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.882767   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883077   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.883108   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883225   11447 provision.go:138] copyHostCerts
	I0813 21:03:46.883299   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:03:46.883314   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:03:46.883398   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:03:46.883517   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:03:46.883530   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:03:46.883563   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:03:46.883642   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:03:46.883654   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:03:46.883682   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:03:46.883763   11447 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813210102-30853 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube default-k8s-different-port-20210813210102-30853]
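
The step above generates a server certificate whose SAN list (the san=[...] values in the log) is signed by minikube's local CA key pair. As a minimal illustrative sketch only (not minikube's actual provision.go code; all names and the generated-in-memory CA are stand-ins for the ca.pem/ca-key.pem files), a SAN-bearing server certificate like this one can be produced with Go's crypto/x509:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Hypothetical in-memory stand-in for the CA loaded from ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert: SANs mirror the san=[...] list in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.50.136"), net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
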
	I0813 21:03:46.987158   11447 provision.go:172] copyRemoteCerts
	I0813 21:03:46.987214   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:03:46.987238   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.992216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992440   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.992475   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992656   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.992817   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.992969   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.993066   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.083216   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0813 21:03:47.100865   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:03:47.117328   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:03:47.134074   11447 provision.go:86] duration metric: configureAuth took 261.642322ms
	I0813 21:03:47.134094   11447 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:03:47.134262   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:03:47.134353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.139472   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139780   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.139807   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139944   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.140097   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140275   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140411   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.140599   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.140769   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.140790   11447 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:03:47.633895   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:03:47.633930   11447 machine.go:91] provisioned docker machine in 1.043703131s
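
The SSH command above drops an env file for CRI-O and restarts the service. A hedged sketch of how that command string could be assembled (buildCRIOOptsCmd is an illustrative name, not a minikube function):

    package main

    import "fmt"

    // buildCRIOOptsCmd mirrors the provisioning step logged above: write an env
    // file for CRI-O under /etc/sysconfig and restart the service.
    func buildCRIOOptsCmd(insecureRegistry string) string {
        opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
        return fmt.Sprintf(
            "sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
            opts)
    }

    func main() {
        fmt.Println(buildCRIOOptsCmd("10.96.0.0/12"))
    }
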
	I0813 21:03:47.633942   11447 start.go:267] post-start starting for "default-k8s-different-port-20210813210102-30853" (driver="kvm2")
	I0813 21:03:47.633950   11447 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:03:47.633971   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.634293   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:03:47.634328   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.639277   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639636   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.639663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.639947   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.640111   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.640242   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.734400   11447 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:03:47.740052   11447 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:03:47.740071   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:03:47.740130   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:03:47.740231   11447 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:03:47.740344   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:03:47.747174   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:03:47.764416   11447 start.go:270] post-start completed in 130.462296ms
	I0813 21:03:47.764450   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.764711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.770040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770384   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.770431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770530   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.770719   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.770894   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.771070   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.771253   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.771444   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.771459   11447 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:03:47.895861   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888627.837623344
	
	I0813 21:03:47.895892   11447 fix.go:212] guest clock: 1628888627.837623344
	I0813 21:03:47.895903   11447 fix.go:225] Guest: 2021-08-13 21:03:47.837623344 +0000 UTC Remote: 2021-08-13 21:03:47.764694239 +0000 UTC m=+16.980843358 (delta=72.929105ms)
	I0813 21:03:47.895929   11447 fix.go:196] guest clock delta is within tolerance: 72.929105ms
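
The fix.go lines above read the guest clock over SSH (date +%s.%N) and compare it with the host clock, accepting a small skew. A minimal sketch of that tolerance check, using the exact timestamps from the log (withinTolerance is an illustrative name):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance mirrors the guest-clock check in the log: take the
    // absolute host/guest delta and compare it against a tolerance.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        // Values taken from the log lines above.
        guest := time.Unix(1628888627, 837623344)
        host := time.Unix(1628888627, 764694239)
        fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true: delta is ~72.9ms
    }
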
	I0813 21:03:47.895937   11447 fix.go:57] fixHost completed within 16.950003029s
	I0813 21:03:47.895942   11447 start.go:80] releasing machines lock for "default-k8s-different-port-20210813210102-30853", held for 16.950031669s
	I0813 21:03:47.896001   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.896297   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:47.901493   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.901838   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.901870   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.902050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902228   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902976   11447 ssh_runner.go:149] Run: systemctl --version
	I0813 21:03:47.902995   11447 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:03:47.903007   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.903040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.909125   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.909452   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909630   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.909813   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.909935   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.910059   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.910088   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910489   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.910527   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910654   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.910777   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.910927   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.911072   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:48.006087   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:03:48.006215   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:03:47.916188   11600 out.go:177] * Restarting existing kvm2 VM for "no-preload-20210813205915-30853" ...
	I0813 21:03:47.916218   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Start
	I0813 21:03:47.916374   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring networks are active...
	I0813 21:03:47.918363   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network default is active
	I0813 21:03:47.918666   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network mk-no-preload-20210813205915-30853 is active
	I0813 21:03:47.919177   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Getting domain xml...
	I0813 21:03:47.921207   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating domain...
	I0813 21:03:48.385941   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting to get IP...
	I0813 21:03:48.387086   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.387686   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Found IP for machine: 192.168.105.107
	I0813 21:03:48.387718   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserving static IP address...
	I0813 21:03:48.387738   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has current primary IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.388204   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.388236   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserved static IP address: 192.168.105.107
	I0813 21:03:48.388276   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | skip adding static IP to network mk-no-preload-20210813205915-30853 - found existing host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"}
	I0813 21:03:48.388306   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:48.388326   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting for SSH to be available...
	I0813 21:03:48.393946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394418   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.394445   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:03:48.394790   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:03:48.394865   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:48.394885   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:03:48.394902   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
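
The WaitForSSH step above shells out to the external ssh binary and runs `exit 0` until the guest's sshd answers (the exit-status-255 retries appear further down). A sketch of that retry loop under the same assumptions (illustrative function name, simplified option set):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH sketches the loop in the log: run `exit 0` over ssh until it
    // succeeds or the attempt budget is exhausted.
    func waitForSSH(user, ip, keyPath string, attempts int) error {
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("/usr/bin/ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                fmt.Sprintf("%s@%s", user, ip),
                "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // the guest's sshd is up
            }
            time.Sleep(3 * time.Second) // back off between attempts, as the log's retries do
        }
        return fmt.Errorf("ssh to %s never came up", ip)
    }

    func main() {
        _ = waitForSSH("docker", "192.168.105.107", "/path/to/id_rsa", 10)
    }
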
	I0813 21:03:51.014322   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.517299   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:51.667636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.672798   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:52.032310   11447 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.026067051s)
	I0813 21:03:52.032472   11447 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:03:52.032533   11447 ssh_runner.go:149] Run: which lz4
	I0813 21:03:52.036917   11447 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:03:52.041879   11447 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:03:52.041911   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 21:03:54.836023   11447 crio.go:362] Took 2.799141 seconds to copy over tarball
	I0813 21:03:54.836104   11447 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
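
The preloaded image tarball is lz4-compressed, so tar is told to filter through lz4 with -I. A hedged sketch of that extraction step (extractPreload is an illustrative name; the real code runs this remotely through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // extractPreload mirrors the logged command: unpack the lz4 tarball into
    // the destination and report how long it took.
    func extractPreload(tarball, dest string) error {
        start := time.Now()
        out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar: %v: %s", err, out)
        }
        fmt.Printf("extracted in %s\n", time.Since(start))
        return nil
    }

    func main() {
        _ = extractPreload("/preloaded.tar.lz4", "/var")
    }
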
	I0813 21:03:56.016199   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.747725   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:56.174092   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.745387   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:57.599639   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:58.136181   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:58.136210   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | command : exit 0
	I0813 21:03:58.136247   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | err     : exit status 255
	I0813 21:03:58.136301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | output  : 
	I0813 21:04:00.599792   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:04:00.606127   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606561   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:00.606599   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606684   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:04:00.606710   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:04:00.606759   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:04:00.606779   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:04:00.606791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 21:04:01.865012   11447 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.028876371s)
	I0813 21:04:01.865051   11447 crio.go:369] Took 7.028990 seconds to extract the tarball
	I0813 21:04:01.865065   11447 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:04:01.909459   11447 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:01.921741   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:01.931836   11447 docker.go:153] disabling docker service ...
	I0813 21:04:01.931885   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:01.943769   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:01.957001   11447 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:02.141489   11447 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:02.286672   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:04:02.301487   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:02.316482   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:04:02.324481   11447 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:02.332086   11447 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:02.332135   11447 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:02.348397   11447 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
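
The sequence above is a deliberate fallback: if the bridge-nf sysctl key is missing, the br_netfilter module has simply not been loaded yet, so minikube loads it and moves on rather than failing. A sketch of that pattern (ensureNetfilter is an illustrative name; the real code runs these over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureNetfilter mirrors the logged fallback: probe the sysctl key, and
    // if it is absent load br_netfilter and probe again.
    func ensureNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
            return nil // key already present, module loaded
        }
        if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v", err)
        }
        return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
    }

    func main() { fmt.Println(ensureNetfilter()) }
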
	I0813 21:04:02.355704   11447 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:02.519419   11447 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:02.853377   11447 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:02.853455   11447 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:02.859109   11447 start.go:413] Will wait 60s for crictl version
	I0813 21:04:02.859179   11447 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:02.895788   11447 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
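
The two "Will wait 60s" lines above are polling loops: stat the CRI socket, then crictl version, each with a deadline. A minimal sketch of the socket wait under those assumptions (waitForSocket is an illustrative name):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket mirrors "Will wait 60s for socket path": poll with stat
    // until the socket appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
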
	I0813 21:04:02.895871   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:02.973856   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:01.014560   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:03.513509   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:01.169481   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.824663   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.802040   11447 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 21:04:04.802102   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:04:04.808733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809248   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:04:04.809286   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809574   11447 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:04.815288   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
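
The bash one-liner above makes the /etc/hosts update idempotent: any existing line for the name is filtered out, the fresh mapping is appended, and the whole file is written back. A sketch of the same logic in Go (upsertHost is an illustrative name; the real code does this remotely with grep/echo/cp as logged):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any stale line ending in "\t<name>", appends the fresh
    // ip/name mapping, and rewrites the file whole.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = upsertHost("/etc/hosts", "192.168.50.1", "host.minikube.internal")
    }
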
	I0813 21:04:04.828595   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:04:04.828664   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.877574   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.877604   11447 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:04:04.877660   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.914222   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.914249   11447 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:04:04.914336   11447 ssh_runner.go:149] Run: crio config
	I0813 21:04:05.157389   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:05.157412   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:05.157424   11447 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:05.157439   11447 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813210102-30853 NodeName:default-k8s-different-port-20210813210102-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.136 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:05.157622   11447 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "default-k8s-different-port-20210813210102-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:05.157727   11447 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=default-k8s-different-port-20210813210102-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.136 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
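
The ExecStart line in the drop-in above is assembled from per-node settings. A hedged sketch of that flag assembly (kubeletFlags is an illustrative name; the map keys mirror flags visible in the logged unit):

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // kubeletFlags renders a settings map as "--key=value" pairs in a
    // deterministic order, as in the logged unit file.
    func kubeletFlags(opts map[string]string) string {
        keys := make([]string, 0, len(opts))
        for k := range opts {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        parts := make([]string, 0, len(keys))
        for _, k := range keys {
            parts = append(parts, fmt.Sprintf("--%s=%s", k, opts[k]))
        }
        return strings.Join(parts, " ")
    }

    func main() {
        fmt.Println(kubeletFlags(map[string]string{
            "container-runtime":          "remote",
            "container-runtime-endpoint": "/var/run/crio/crio.sock",
            "hostname-override":          "default-k8s-different-port-20210813210102-30853",
            "node-ip":                    "192.168.50.136",
        }))
    }
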
	I0813 21:04:05.157774   11447 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:04:05.167087   11447 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:05.167155   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:05.175473   11447 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (528 bytes)
	I0813 21:04:05.188753   11447 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:04:05.201467   11447 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0813 21:04:05.215461   11447 ssh_runner.go:149] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:05.220200   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.231726   11447 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853 for IP: 192.168.50.136
	I0813 21:04:05.231797   11447 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:05.231825   11447 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:05.231898   11447 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.key
	I0813 21:04:05.231928   11447 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key.cb5546de
	I0813 21:04:05.231952   11447 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key
	I0813 21:04:05.232111   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:05.232165   11447 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:05.232188   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:05.232232   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:05.232271   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:05.232307   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:05.232379   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:05.233804   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:05.253715   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:05.273351   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:05.290830   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:04:05.308416   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:05.326529   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:05.346664   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:05.364492   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:05.381949   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:05.399680   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:05.419759   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:05.438209   11447 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:05.450680   11447 ssh_runner.go:149] Run: openssl version
	I0813 21:04:05.457245   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:05.465670   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.470976   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.471018   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.477477   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:05.486446   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:05.494612   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499391   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499438   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.505622   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:05.514421   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:05.523408   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528337   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528382   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.535765   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
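
The openssl/ln pairs above populate the OpenSSL trust store: each CA cert is hashed with `openssl x509 -hash -noout` and exposed as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can look it up (b5213941.0, 51391683.0, and 3ec20f2e.0 in the log). A sketch of that step (linkCACert is an illustrative name; the real code runs these commands remotely):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert asks openssl for the cert's subject hash, then symlinks the
    // cert as <hash>.0 in the trust directory.
    func linkCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace a stale link if one exists
        return os.Symlink(certPath, link)
    }

    func main() {
        _ = linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
    }
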
	I0813 21:04:05.544593   11447 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813210102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:05.544684   11447 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:05.544726   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:05.585256   11447 cri.go:76] found id: ""
	I0813 21:04:05.585334   11447 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:05.593681   11447 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:05.593711   11447 kubeadm.go:600] restartCluster start
	I0813 21:04:05.593760   11447 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:05.602117   11447 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:05.603061   11447 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813210102-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:05.603385   11447 kubeconfig.go:128] "default-k8s-different-port-20210813210102-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:05.604147   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:05.606733   11447 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:05.614257   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.614297   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.624492   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:02.775071   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:04:02.775420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetConfigRaw
	I0813 21:04:02.776115   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:02.782201   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.782674   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.782712   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.783141   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:04:02.783367   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783571   11600 machine.go:88] provisioning docker machine ...
	I0813 21:04:02.783598   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.783946   11600 buildroot.go:166] provisioning hostname "no-preload-20210813205915-30853"
	I0813 21:04:02.783971   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.784147   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.789849   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790287   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.790320   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.790578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790777   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790928   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.791095   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.791315   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.791336   11600 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813205915-30853 && echo "no-preload-20210813205915-30853" | sudo tee /etc/hostname
	I0813 21:04:02.946559   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813205915-30853
	
	I0813 21:04:02.946596   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.952957   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953358   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.953393   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953568   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.953745   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.953960   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.954167   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.954385   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.954624   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.954665   11600 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813205915-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813205915-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813205915-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
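The snippet above keeps the new hostname locally resolvable: if no /etc/hosts line already ends in no-preload-20210813205915-30853, it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists, which avoids the classic "sudo: unable to resolve host" warnings after a hostname change. Applied to a stale entry, the sed branch would do, for example:

	# before: 127.0.1.1 minikube                          (hypothetical stale entry)
	# after:  127.0.1.1 no-preload-20210813205915-30853   (rewritten in place)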
	I0813 21:04:03.094292   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:04:03.094324   11600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:04:03.094356   11600 buildroot.go:174] setting up certificates
	I0813 21:04:03.094369   11600 provision.go:83] configureAuth start
	I0813 21:04:03.094384   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:03.094688   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:03.100354   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.100739   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.105867   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106237   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.106310   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106463   11600 provision.go:138] copyHostCerts
	I0813 21:04:03.106530   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:04:03.106543   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:04:03.106590   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:04:03.106682   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:04:03.106693   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:04:03.106720   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:04:03.106783   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:04:03.106793   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:04:03.106815   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:04:03.106882   11600 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813205915-30853 san=[192.168.105.107 192.168.105.107 localhost 127.0.0.1 minikube no-preload-20210813205915-30853]
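provision.go generates this server certificate in-process (no openssl involved); note the SAN list covers the VM IP, localhost, and both machine names. To inspect the SANs on the resulting server.pem, one could run, for instance (path is illustrative):

	openssl x509 -in /path/to/.minikube/machines/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'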
	I0813 21:04:03.232637   11600 provision.go:172] copyRemoteCerts
	I0813 21:04:03.232735   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:04:03.232781   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.238750   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239227   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.239262   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.239634   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.239802   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.239979   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:03.330067   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:04:03.347432   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:04:03.580187   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:04:03.733835   11600 provision.go:86] duration metric: configureAuth took 639.447362ms
	I0813 21:04:03.733873   11600 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:04:03.734092   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:04:03.734225   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.740654   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741046   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.741091   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741217   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.741420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741586   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741748   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.741941   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:03.742078   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:03.742093   11600 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:04:04.399833   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:04:04.399867   11600 machine.go:91] provisioned docker machine in 1.616277375s
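The `%!s(MISSING)` in the provisioning command above is a Go fmt artifact baked into the log line itself (a %s verb with no matching argument); the command presumably sent over SSH is:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio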
	I0813 21:04:04.399881   11600 start.go:267] post-start starting for "no-preload-20210813205915-30853" (driver="kvm2")
	I0813 21:04:04.399888   11600 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:04:04.399909   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.400282   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:04:04.400324   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.406533   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.406945   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.406987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.407240   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.407441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.407578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.407746   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.498949   11600 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:04:04.503867   11600 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:04:04.503896   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:04:04.503972   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:04:04.504097   11600 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:04:04.504223   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:04:04.511733   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:04.528408   11600 start.go:270] post-start completed in 128.513758ms
	I0813 21:04:04.528443   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.528707   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.534254   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534663   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.534695   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.534987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535140   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535279   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.535426   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:04.535597   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:04.535608   11600 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:04:04.663945   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888644.593571707
	
	I0813 21:04:04.663967   11600 fix.go:212] guest clock: 1628888644.593571707
	I0813 21:04:04.663974   11600 fix.go:225] Guest: 2021-08-13 21:04:04.593571707 +0000 UTC Remote: 2021-08-13 21:04:04.528687546 +0000 UTC m=+32.319635142 (delta=64.884161ms)
	I0813 21:04:04.663992   11600 fix.go:196] guest clock delta is within tolerance: 64.884161ms
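Same fmt artifact here: the guest-clock probe is presumably `date +%s.%N`, whose epoch output is then compared against the host's wall clock; the 64.884161ms delta is under minikube's drift tolerance, so no clock adjustment is needed:

	$ date +%s.%N
	1628888644.593571707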
	I0813 21:04:04.663998   11600 fix.go:57] fixHost completed within 16.76784432s
	I0813 21:04:04.664002   11600 start.go:80] releasing machines lock for "no-preload-20210813205915-30853", held for 16.76787935s
	I0813 21:04:04.664032   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.664301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:04.670385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670693   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.670728   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670905   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671084   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671497   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671741   11600 ssh_runner.go:149] Run: systemctl --version
	I0813 21:04:04.671770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.671781   11600 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:04:04.671828   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.677842   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.677920   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678239   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678271   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678303   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678537   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678601   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678680   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678746   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678866   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.678918   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.778153   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:04.778247   11600 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:04.790123   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:04.799742   11600 docker.go:153] disabling docker service ...
	I0813 21:04:04.799795   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:04.814660   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:04.826371   11600 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:04.984940   11600 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:05.134330   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
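Since this profile uses CRI-O, the Docker units are stopped, the socket disabled, and the service masked so socket activation cannot bring Docker back as a competing runtime. Condensed, the sequence above is roughly:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket       # block socket activation
	sudo systemctl mask docker.service         # prevent any other unit from starting it
	sudo systemctl is-active --quiet docker || echo "docker is down"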
	I0813 21:04:05.146967   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:05.162919   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
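Restoring the mangled %s verb, the crictl config written above should read as follows; the sed on the last record then pins CRI-O's pause_image to k8s.gcr.io/pause:3.4.1 in /etc/crio/crio.conf:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock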
	I0813 21:04:05.171969   11600 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:05.178773   11600 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:05.178830   11600 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:05.195828   11600 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
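The netfilter failure is expected on a fresh boot: the sysctl key only exists once br_netfilter is loaded, which is why crio.go treats it as "might be okay" and falls back to modprobe. The recovery visible above amounts to:

	sudo sysctl net.bridge.bridge-nf-call-iptables \
	  || sudo modprobe br_netfilter                       # key is absent until the module loads
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # turn on IP forwarding for pod networking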
	I0813 21:04:05.202754   11600 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:05.337419   11600 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:05.559682   11600 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:05.559752   11600 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:05.566062   11600 start.go:413] Will wait 60s for crictl version
	I0813 21:04:05.566138   11600 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:05.601921   11600 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:05.602001   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.842661   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.956395   11600 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:04:05.956450   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:05.962605   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.962975   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:05.962999   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.963185   11600 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:05.968381   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
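That one-liner is minikube's idempotent /etc/hosts update: the brace group prints every existing line except a stale host.minikube.internal entry, appends the fresh mapping, stages the result in a PID-keyed temp file, and a sudo cp swaps it into place. Schematically:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts;      # drop any stale entry
	  printf '192.168.105.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                            # install the rebuilt file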
	I0813 21:04:05.979746   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:05.979790   11600 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:06.037577   11600 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:04:06.037602   11600 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 k8s.gcr.io/kube-proxy:v1.22.0-rc.0 k8s.gcr.io/pause:3.4.1 k8s.gcr.io/etcd:3.4.13-3 k8s.gcr.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.037756   11600 image.go:133] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.037772   11600 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.037785   11600 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:06.037762   11600 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:06.037738   11600 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.037735   11600 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.037741   11600 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:06.037767   11600 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:06.039362   11600 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0813 21:04:06.053753   11600 image.go:171] found k8s.gcr.io/pause:3.4.1 locally: &{Image:0xc000d620e0}
	I0813 21:04:06.053840   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.454088   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.627170   11600 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000a3e0e0}
	I0813 21:04:06.627262   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.677125   11600 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" does not exist at hash "ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c" in container runtime
	I0813 21:04:06.677177   11600 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.677243   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.772729   11600 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000a3e3e0}
	I0813 21:04:06.772826   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.829141   11600 image.go:171] found k8s.gcr.io/coredns/coredns:v1.8.0 locally: &{Image:0xc00142e1e0}
	I0813 21:04:06.829237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.902889   11600 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 21:04:06.902989   11600 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.903035   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.902933   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:07.109713   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.109813   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:07.109896   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117259   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0 (exists)
	I0813 21:04:07.117279   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117314   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.171175   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 21:04:07.171310   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
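The image-cache dance for a preload-less profile is visible here: stat the tarball already on the guest, skip the transfer when it matches the host-side cache, then stream it via podman load into the shared containers/storage that CRI-O reads. By hand that is roughly:

	stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0   # compare size/mtime with the cache
	sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	sudo crictl images | grep kube-proxy                               # CRI-O should now list it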
	I0813 21:04:05.516944   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:08.013394   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:07.172226   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:09.188184   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:05.824992   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.825077   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.837175   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.025601   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.025691   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.036326   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.225644   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.225742   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.238574   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.425637   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.425737   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.438316   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.625622   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.625698   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.643437   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.824708   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.824784   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.840790   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.024978   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.025048   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.042237   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.225613   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.225690   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.238533   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.424924   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.425004   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.437239   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.625345   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.625418   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.643925   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.825147   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.825246   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.839517   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.024742   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.024831   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.037540   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.224652   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.224733   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.237758   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.425032   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.425121   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.438563   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.624675   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.624790   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.640197   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.640219   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.640266   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.654071   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.654097   11447 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:04:08.654106   11447 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:08.654124   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:08.654177   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:08.717698   11447 cri.go:76] found id: ""
	I0813 21:04:08.717795   11447 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:08.753323   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:08.778307   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:08.778369   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800125   11447 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800151   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:09.316586   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.438674   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.122049553s)
	I0813 21:04:10.438715   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:07.759123   11600 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000d620e0}
	I0813 21:04:07.759237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:09.111081   11600 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc00142e040}
	I0813 21:04:09.111212   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:09.462306   11600 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc00142e140}
	I0813 21:04:09.462414   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:10.255823   11600 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc0012f0120}
	I0813 21:04:10.255916   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:11.315708   11600 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000d62460}
	I0813 21:04:11.315815   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:10.514963   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:12.516333   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:11.670913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:14.171134   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:10.800884   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.992029   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
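Because all four kubeconfigs were missing, minikube rebuilds the control plane piecewise instead of running a full kubeadm init; the phase sequence issued over the last few records is:

	kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start    --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml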
	I0813 21:04:11.167449   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:11.167518   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:11.684011   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.184677   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.684502   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.184162   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.684035   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.183991   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.683969   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.184603   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.684380   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.372670   11600 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (6.201329225s)
	I0813 21:04:13.372706   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: (6.255368199s)
	I0813 21:04:13.372718   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0813 21:04:13.372732   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 from cache
	I0813 21:04:13.372728   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (5.613461548s)
	I0813 21:04:13.372758   11600 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372783   11600 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" does not exist at hash "7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75" in container runtime
	I0813 21:04:13.372830   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (3.910399102s)
	I0813 21:04:13.372858   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372868   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3: (3.116939311s)
	I0813 21:04:13.372873   11600 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" does not exist at hash "b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a" in container runtime
	I0813 21:04:13.372900   11600 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:13.372831   11600 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.372924   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (2.057095132s)
	I0813 21:04:13.372931   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372936   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372786   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0: (4.261556732s)
	I0813 21:04:13.373009   11600 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" does not exist at hash "cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c" in container runtime
	I0813 21:04:13.373032   11600 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:13.373056   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.381245   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.381490   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:15.288527   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.915644282s)
	I0813 21:04:15.288559   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 21:04:15.288601   11600 ssh_runner.go:189] Completed: which crictl: (1.91552977s)
	I0813 21:04:15.288660   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:15.288670   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (1.907403335s)
	I0813 21:04:15.288709   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288741   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (1.90722818s)
	I0813 21:04:15.288782   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.288805   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288858   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323185   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323264   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323283   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323302   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323314   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323320   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.329111   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0 (exists)
	I0813 21:04:15.011212   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:17.011691   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.670490   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:19.170343   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.184356   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:16.684936   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.184954   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.684681   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.184911   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.684242   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.184095   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.683984   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.184175   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.210489   11447 api_server.go:70] duration metric: took 9.043039811s to wait for apiserver process to appear ...
	I0813 21:04:20.210519   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:20.210533   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:20.211291   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": dial tcp 192.168.50.136:8444: connect: connection refused
	I0813 21:04:20.711989   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:21.745565   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: (6.422201905s)
	I0813 21:04:21.745599   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 from cache
	I0813 21:04:21.745635   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:21.745691   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:19.017281   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.514778   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.515219   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.171057   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.670243   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.713040   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:24.199550   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: (2.45382894s)
	I0813 21:04:24.199592   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 from cache
	I0813 21:04:24.199629   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:24.199702   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:26.212134   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:26.605510   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:04:26.605545   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:04:26.711743   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.047887   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.047925   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.212219   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.218272   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.218303   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.711515   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.725621   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.725665   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:28.212046   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:28.224546   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:04:28.234553   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:04:28.234579   11447 api_server.go:129] duration metric: took 8.024053155s to wait for apiserver health ...
	I0813 21:04:28.234595   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:28.234616   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
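The sequence above is minikube's apiserver health wait: poll /healthz roughly every 500ms, treat connection refused, 403 (the anonymous probe user during RBAC bootstrap) and 500 (poststarthooks still failing) as not-ready, and succeed only on a 200 with a literal "ok" body. A minimal sketch of that retry pattern in Go, assuming an illustrative name (waitForHealthz) and a hard-coded interval rather than minikube's actual helper:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers
// HTTP 200 with a body of "ok", or the deadline expires. 403s and 500s
// from partially-started poststarthooks count as "not ready yet".
func waitForHealthz(url string, timeout, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver serves a cert this probe cannot
		// verify, so verification is skipped for the health check only.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	err := waitForHealthz("https://192.168.50.136:8444/healthz", 2*time.Minute, 500*time.Millisecond)
	fmt.Println(err)
}
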
	I0813 21:04:26.019080   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.516769   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.670866   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:27.671923   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:30.171118   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.236904   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:04:28.236969   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:04:28.252383   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:04:28.300743   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:04:28.320179   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:04:28.320225   11447 system_pods.go:61] "coredns-558bd4d5db-v2sv5" [3b82b811-5e28-41dc-b0e1-71233efc654e] Running
	I0813 21:04:28.320234   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [89cff97c-ff5c-4920-a05f-1ec7b313043b] Running
	I0813 21:04:28.320241   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [734380ac-398d-4b51-a67f-aaac2457110c] Running
	I0813 21:04:28.320252   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [ebc5d291-624f-4c49-b9cb-436204a7665a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:04:28.320261   11447 system_pods.go:61] "kube-proxy-99cxm" [a1bfba1d-d9fb-4d24-abe9-fd0522c591f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:04:28.320271   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [b66e01ad-943e-4a2c-aabe-d18f92fd5eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 21:04:28.320290   11447 system_pods.go:61] "metrics-server-7c784ccb57-xfj59" [b522ac66-040a-4030-a817-c422c703b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:04:28.320308   11447 system_pods.go:61] "storage-provisioner" [d59ea453-ed7b-4952-bd61-7993245a1986] Running
	I0813 21:04:28.320315   11447 system_pods.go:74] duration metric: took 19.546937ms to wait for pod list to return data ...
	I0813 21:04:28.320330   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:04:28.329682   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:04:28.329749   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:04:28.329769   11447 node_conditions.go:105] duration metric: took 9.429948ms to run NodePressure ...
	I0813 21:04:28.329793   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:29.546168   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.216348804s)
	I0813 21:04:29.546210   11447 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563341   11447 kubeadm.go:746] kubelet initialised
	I0813 21:04:29.563369   11447 kubeadm.go:747] duration metric: took 17.148102ms waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563380   11447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:04:29.573196   11447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
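The pod_ready.go:102 lines interleaved throughout this section are three parallel test profiles (pids 10272, 10867 and 11447) each running the same loop: fetch a pod and re-check its Ready condition until it flips to True or the per-pod timeout expires. A rough client-go equivalent of the check itself, with isPodReady as an illustrative name, not minikube's helper:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady fetches a pod and reports whether its PodReady condition is
// True -- the condition pod_ready.go logs as `"Ready":"False"` on every
// poll until the pod's containers come up.
func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), client, "kube-system", "coredns-558bd4d5db-v2sv5")
	fmt.Println(ready, err)
}
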
	I0813 21:04:29.338170   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: (5.138437758s)
	I0813 21:04:29.338201   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 from cache
	I0813 21:04:29.338230   11600 cache_images.go:113] Successfully loaded all cached images
	I0813 21:04:29.338242   11600 cache_images.go:82] LoadImages completed in 23.300623842s
	I0813 21:04:29.338374   11600 ssh_runner.go:149] Run: crio config
	I0813 21:04:29.638116   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:04:29.638137   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:29.638149   11600 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:29.638162   11600 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.107 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813205915-30853 NodeName:no-preload-20210813205915-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.107 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:29.638336   11600 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "no-preload-20210813205915-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
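minikube renders the kubeadm config above by executing Go templates against the options struct logged at kubeadm.go:153. A stripped-down sketch of that render step using only the standard library; the template fragment and struct fields here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A fragment of a kubeadm InitConfiguration rendered the way the log's
// config dump was produced: options struct in, YAML out.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.105.107",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "no-preload-20210813205915-30853",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
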
	I0813 21:04:29.638444   11600 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=no-preload-20210813205915-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.107 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:04:29.638511   11600 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:04:29.651119   11600 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:29.651199   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:29.658178   11600 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (518 bytes)
	I0813 21:04:29.674188   11600 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:04:29.689809   11600 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I0813 21:04:29.704568   11600 ssh_runner.go:149] Run: grep 192.168.105.107	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:29.709516   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:29.722084   11600 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853 for IP: 192.168.105.107
	I0813 21:04:29.722165   11600 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:29.722197   11600 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:29.722281   11600 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.key
	I0813 21:04:29.722312   11600 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key.209a1939
	I0813 21:04:29.722343   11600 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key
	I0813 21:04:29.722473   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:29.722561   11600 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:29.722580   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:29.722661   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:29.722712   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:29.722757   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:29.722866   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:29.724368   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:29.746769   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:29.768192   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:29.786871   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:04:29.806532   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:29.825599   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:29.847494   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:29.870257   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:29.892328   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:29.912923   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:29.931703   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:29.951536   11600 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:29.968398   11600 ssh_runner.go:149] Run: openssl version
	I0813 21:04:29.976170   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:29.984473   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989429   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989476   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.995576   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:30.003420   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:30.011665   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.017989   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.018036   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.025928   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:30.036305   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:30.046763   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052505   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052558   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.059983   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
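The openssl x509 -hash -noout / ln -fs pairs above implement the OpenSSL CA-directory convention: each trusted certificate gets a symlink named <subject-hash>.0 so TLS libraries can locate it by hashed subject. A small Go sketch of the same step, using a path from this log; hashLink is an illustrative name, and os.Symlink stands in for the shell's test -L / ln -fs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink computes the OpenSSL subject hash of a certificate and links
// it into /etc/ssl/certs as <hash>.0, skipping the link if one is
// already present.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
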
	I0813 21:04:30.068353   11600 kubeadm.go:390] StartCluster: {Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:30.068511   11600 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:30.068563   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:30.103079   11600 cri.go:76] found id: ""
	I0813 21:04:30.103167   11600 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:30.112165   11600 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:30.112188   11600 kubeadm.go:600] restartCluster start
	I0813 21:04:30.112242   11600 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:30.120196   11600 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.121712   11600 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813205915-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:30.122350   11600 kubeconfig.go:128] "no-preload-20210813205915-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:30.123522   11600 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:30.127714   11600 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:30.134966   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.135011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.144537   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.344893   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.345009   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.354676   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.544891   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.544966   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.554560   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.744600   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.744692   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.756935   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.945184   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.945265   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.955263   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.145650   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.145758   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.157682   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.344971   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.345039   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.354648   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.544933   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.545001   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.554862   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.745107   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.745178   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.756702   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.945036   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.945134   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.956052   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.145356   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.145486   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.154892   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.013514   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.515372   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.667378   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:34.671027   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:31.606937   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.614157   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.344907   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.344989   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.354828   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.545178   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.545268   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.554771   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.745015   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.745132   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.754451   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.945134   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.945223   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.958046   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.145379   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.145471   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.156311   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.156338   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.156387   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.166450   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.166479   11600 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:04:33.166489   11600 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:33.166504   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:33.166556   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:33.201224   11600 cri.go:76] found id: ""
	I0813 21:04:33.201320   11600 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:33.218274   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:33.226895   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:33.226953   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233603   11600 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233633   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:33.409004   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.227200   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.522150   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.670047   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.781290   11600 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:34.781393   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.294318   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.794319   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.294093   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.794810   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.517996   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.013307   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.169398   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:39.667640   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:36.109861   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.110944   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:40.608444   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.294229   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:37.794174   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.294380   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.795081   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.295011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.794912   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.294691   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.794676   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.294339   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.794517   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.514739   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.515815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:41.674615   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:44.171008   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:43.111611   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:45.608557   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.294762   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:42.794735   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.294817   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.794556   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.818714   11600 api_server.go:70] duration metric: took 9.037423183s to wait for apiserver process to appear ...
	I0813 21:04:43.818749   11600 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:43.818763   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:43.819314   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": dial tcp 192.168.105.107:8443: connect: connection refused
	I0813 21:04:44.319959   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:45.012244   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.016481   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:46.672075   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.172907   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.615450   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:50.112038   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.320842   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:49.820028   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:49.514363   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.012464   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:51.669686   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:53.793699   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.607875   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.608704   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.821107   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:55.319665   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:54.013451   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.512870   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.517483   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.168752   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.169636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:57.108818   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:59.110668   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.319940   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:05:00.819508   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:01.018546   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:03.515645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.668977   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:02.670402   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.170956   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:01.618304   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:04.109034   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.157882   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:05:05.158001   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:05:05.320212   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.504416   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.504471   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:05.819967   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.864291   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.864338   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.319440   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.332338   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:06.332364   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.820046   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.827164   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 200:
	ok
	I0813 21:05:06.836155   11600 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:05:06.836176   11600 api_server.go:129] duration metric: took 23.017420085s to wait for apiserver health ...
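
The 403 → 500 → 200 progression above is the normal apiserver startup sequence: anonymous /healthz probes are rejected until RBAC bootstrap allows them, then individual poststarthooks flip from [-] to [+] until the endpoint returns a plain "ok". A minimal sketch of such a poll loop follows; the function name, intervals, and TLS handling are illustrative assumptions, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout elapses. TLS verification is skipped only because this
// sketch carries no cluster CA bundle; real tooling should verify.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403: anonymous access still forbidden; 500: some
			// poststarthooks still failing. Either way, keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.105.107:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
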
	I0813 21:05:06.836188   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:05:06.836198   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:05:06.838586   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:05:06.838684   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:05:06.847037   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
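
The 457-byte conflist pushed to /etc/cni/net.d above enables the bridge CNI that the log recommends for the kvm2 + crio combination. The log does not show the file's contents; a generic bridge conflist of the same shape, written from Go, might look like the sketch below. The subnet, plugin list, and field values are assumptions for illustration, not minikube's actual 1-k8s.conflist.

package main

import (
	"fmt"
	"os"
)

// A generic two-plugin bridge conflist (bridge + portmap). Values here are
// illustrative; minikube's real file may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Requires root, like the `sudo mkdir -p /etc/cni/net.d` above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
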
	I0813 21:05:06.865264   11600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:05:06.893537   11600 system_pods.go:59] 8 kube-system pods found
	I0813 21:05:06.893572   11600 system_pods.go:61] "coredns-78fcd69978-wqktx" [84e2ed0e-2c5a-4dcc-a8de-2cee9f92d267] Running
	I0813 21:05:06.893578   11600 system_pods.go:61] "etcd-no-preload-20210813205915-30853" [de55bcf6-20c8-4b4a-81e0-b181cca0e618] Running
	I0813 21:05:06.893582   11600 system_pods.go:61] "kube-apiserver-no-preload-20210813205915-30853" [53002765-155d-4f17-b484-2fe4e088255d] Running
	I0813 21:05:06.893587   11600 system_pods.go:61] "kube-controller-manager-no-preload-20210813205915-30853" [6052be3c-51df-4a5c-b8a1-6a5a64b4d241] Running
	I0813 21:05:06.893594   11600 system_pods.go:61] "kube-proxy-vvkkd" [c6eef664-f71d-4d0f-aec7-8942b5977520] Running
	I0813 21:05:06.893599   11600 system_pods.go:61] "kube-scheduler-no-preload-20210813205915-30853" [24d521ca-7b13-4b06-805d-7b568471cffb] Running
	I0813 21:05:06.893615   11600 system_pods.go:61] "metrics-server-7c784ccb57-rfp5v" [8c3b111e-0b1d-4a36-85ab-49fe495a538e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:05:06.893629   11600 system_pods.go:61] "storage-provisioner" [dfb23af4-15d2-420e-8720-c4fee1cf94f8] Running
	I0813 21:05:06.893637   11600 system_pods.go:74] duration metric: took 28.354614ms to wait for pod list to return data ...
	I0813 21:05:06.893648   11600 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:05:06.916270   11600 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:05:06.916300   11600 node_conditions.go:123] node cpu capacity is 2
	I0813 21:05:06.916316   11600 node_conditions.go:105] duration metric: took 22.662818ms to run NodePressure ...
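
The pod list and NodePressure checks above map onto two straightforward client-go calls: list the kube-system pods, then read each node's capacity (the log reports ephemeral storage of 17784752Ki and 2 CPUs). A compilable sketch under assumed names; the kubeconfig path is a placeholder, and this is not minikube's system_pods.go or node_conditions.go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// RecommendedHomeFile is ~/.kube/config; substitute the test profile's
	// kubeconfig as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of "waiting for kube-system pods to appear".
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Equivalent of the NodePressure capacity checks.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}
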
	I0813 21:05:06.916337   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:05.516343   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.517331   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.670058   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:09.675888   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:06.111044   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.608567   11447 pod_ready.go:92] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.608606   11447 pod_ready.go:81] duration metric: took 38.035378096s waiting for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.608620   11447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615404   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.615428   11447 pod_ready.go:81] duration metric: took 6.797829ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615442   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630269   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.630291   11447 pod_ready.go:81] duration metric: took 14.84004ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630301   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637173   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.637191   11447 pod_ready.go:81] duration metric: took 6.881994ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637205   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641787   11447 pod_ready.go:92] pod "kube-proxy-99cxm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.641806   11447 pod_ready.go:81] duration metric: took 4.592412ms waiting for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641816   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006732   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:08.006761   11447 pod_ready.go:81] duration metric: took 364.934714ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006777   11447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.416206   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.404648   11600 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:05:07.414912   11600 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0813 21:05:07.708787   11600 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0813 21:05:08.256390   11600 kubeadm.go:746] kubelet initialised
	I0813 21:05:08.256419   11600 kubeadm.go:747] duration metric: took 851.738381ms waiting for restarted kubelet to initialise ...
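
The two "will retry after ..." lines show the retry helper backing off (276ms, then 540ms) until the restarted kubelet reports initialised, just under a second in total. The underlying pattern is a bounded retry with a growing delay; the stand-in below is illustrative and not minikube's actual retry package.

package sketch

import (
	"fmt"
	"time"
)

// retryUntil calls fn until it succeeds or the timeout elapses, roughly
// doubling the delay between attempts. Names and the exact backoff policy
// are assumptions for the sketch.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}
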
	I0813 21:05:08.256432   11600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:05:08.265413   11600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.372610   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:10.016406   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.513411   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.171097   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.667560   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.416520   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.917152   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.791126   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:15.296951   11600 pod_ready.go:92] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:15.296981   11600 pod_ready.go:81] duration metric: took 7.031537534s waiting for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:15.296992   11600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:14.513966   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.518250   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.669467   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:18.670323   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.956540   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.413311   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.316436   11600 pod_ready.go:102] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.817195   11600 pod_ready.go:92] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.817242   11600 pod_ready.go:81] duration metric: took 2.520242337s waiting for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.817255   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.825965   11600 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.825988   11600 pod_ready.go:81] duration metric: took 8.722511ms waiting for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.826001   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:19.873713   11600 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.011904   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.016678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.516661   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.171346   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.667746   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.422135   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.915750   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:22.369972   11600 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.370008   11600 pod_ready.go:81] duration metric: took 4.543995238s waiting for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.370023   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377665   11600 pod_ready.go:92] pod "kube-proxy-vvkkd" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.377685   11600 pod_ready.go:81] duration metric: took 7.65301ms waiting for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377696   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385096   11600 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.385113   11600 pod_ready.go:81] duration metric: took 7.408599ms waiting for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385121   11600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:24.402382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.901061   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.018949   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.513145   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:25.668326   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.186367   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.415525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.913863   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.902947   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.903048   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.516874   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.011959   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.666530   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:32.666799   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:34.668707   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.915376   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.415440   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.415962   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.403872   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.902644   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.014820   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.015893   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.169496   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.170551   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.918334   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.414297   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:38.408969   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.903397   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.017723   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.512620   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.513209   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.171007   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.668192   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:42.915720   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.423660   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.403450   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.445034   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.515122   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.013001   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.669651   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.167953   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.171552   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.916795   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:49.916975   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.904497   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.399990   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.512153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.512918   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.174821   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.670257   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.414652   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.415677   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.404181   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.904430   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.515153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.013806   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.168792   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.666912   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:56.416201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:58.917986   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.401016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.404016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.906289   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.512815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.514140   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.668491   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.668678   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.413828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.414479   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.403957   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.901856   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.012166   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.013309   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.512931   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.168995   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.667450   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:05.918408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.416404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.416808   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.903609   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.405857   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.014642   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.512706   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.669910   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.170072   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:12.919893   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.417469   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.901800   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:16.402802   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.514827   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.012928   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.668033   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.668913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.167322   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.914829   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.413984   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.405532   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.902412   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.017907   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.514292   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.170177   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.668943   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.416213   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.922905   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.902968   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.401882   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.067645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.519637   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.167658   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.168133   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.413791   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.414145   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.402765   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.403392   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.900702   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:30.012069   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:32.014177   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.169296   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.160326   10272 pod_ready.go:81] duration metric: took 4m0.399801158s waiting for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" ...
	E0813 21:06:33.160356   10272 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:06:33.160383   10272 pod_ready.go:38] duration metric: took 4m1.6003819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:33.160416   10272 kubeadm.go:604] restartCluster took 4m59.137608004s
	W0813 21:06:33.160600   10272 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:06:33.160640   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
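
Every pod_ready.go:102 line in this section is one iteration of the same wait: fetch the pod, inspect its Ready condition, log "Ready":"False", sleep, and try again until the 4m window closes. That window just expired for metrics-server-8546d8b77b-wf2ft, which is why restartCluster gave up and fell back to the kubeadm reset above. A condensed client-go equivalent follows; the helper name and 2s poll interval are assumptions, not minikube's pod_ready.go internals.

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the
// timeout elapses, mirroring the pod_ready loop in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q namespace to be \"Ready\"", timeout, name, ns)
}
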
	I0813 21:06:31.419127   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.918800   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.903797   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.401884   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:34.015031   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.513631   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.414485   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.415451   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.416420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.900640   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.901483   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:39.011809   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:41.013908   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:43.513605   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.920201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.415258   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.905257   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:44.905610   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.514466   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.515852   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.415484   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.415708   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.414520   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.903972   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.517251   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.012858   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:51.918221   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.918831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.402393   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.902136   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.513409   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.012531   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.392100   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.231434099s)
	I0813 21:07:00.392193   10272 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:00.406886   10272 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:00.406959   10272 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:00.442137   10272 cri.go:76] found id: ""
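
cri.go found no kube-system containers left after the reset (found id: ""), using the crictl invocation shown above. Driving the same check from Go might look like the sketch below; it invokes crictl directly rather than through the `sudo -s eval` shell wrapper the ssh_runner uses, and the output handling is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Lists kube-system container IDs through crictl, as the ssh_runner line
// above does over SSH. An empty result means the reset removed everything.
func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
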
	I0813 21:07:00.442208   10272 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:00.449499   10272 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:00.458330   10272 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:00.458372   10272 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0813 21:06:55.923186   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:58.413947   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.414960   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.401732   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:59.404622   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.901431   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.146030   10272 out.go:204]   - Generating certificates and keys ...
	I0813 21:06:59.013910   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.514845   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:02.514874   10272 out.go:204]   - Booting up control plane ...
	I0813 21:07:02.420421   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.921161   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:03.901922   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.400821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.017697   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.512767   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:07.415160   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.916408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:08.402752   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:10.903350   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.011421   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:11.015678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.515855   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.594414   10272 out.go:204]   - Configuring RBAC rules ...
	I0813 21:07:15.029321   10272 cni.go:93] Creating CNI manager for ""
	I0813 21:07:15.029346   10272 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:07:15.031000   10272 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:07:15.031061   10272 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:07:15.039108   10272 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:07:15.058649   10272 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:07:15.058707   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.058717   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205823-30853 minikube.k8s.io/updated_at=2021_08_13T21_07_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.095343   10272 ops.go:34] apiserver oom_adj: 16
	I0813 21:07:15.095372   10272 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 21:07:15.095386   10272 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
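
ops.go read the apiserver's oom_adj (16) and rewrote it to -10 so the kernel OOM killer prefers other processes over the apiserver under memory pressure. The same adjustment done directly from Go, rather than through the shell pipeline above, is sketched below; it requires root, and pgrep may return several pids on a busy host, which this sketch does not handle.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver pid, as the $(pgrep kube-apiserver) in the
	// logged shell commands does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	path := fmt.Sprintf("/proc/%s/oom_adj", pid)

	cur, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", cur)

	// Lower the score so the OOM killer deprioritizes the apiserver.
	if err := os.WriteFile(path, []byte("-10\n"), 0o644); err != nil {
		panic(err)
	}
}
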
	I0813 21:07:15.330590   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:12.413115   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.414512   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.400030   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.403757   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.505147   10867 pod_ready.go:81] duration metric: took 4m0.402080118s waiting for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" ...
	E0813 21:07:15.505169   10867 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:07:15.505190   10867 pod_ready.go:38] duration metric: took 4m39.330917946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:15.505243   10867 kubeadm.go:604] restartCluster took 5m2.104930788s
	W0813 21:07:15.505419   10867 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:07:15.505453   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:07:15.931748   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.430811   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.930834   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.430845   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.930776   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.431732   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.930812   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.431647   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.931099   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.431444   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.414885   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.422404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:17.901988   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.403379   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.930893   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.430961   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.931774   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.431310   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.931068   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.431314   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.931570   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.431290   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.931320   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:25.431531   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.914560   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.914642   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.916586   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.902451   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.903333   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.931646   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.431685   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.931719   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.431409   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.930888   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.431524   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.931535   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.431073   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.931502   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:30.430962   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.919653   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.418420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:30.543916   10272 kubeadm.go:985] duration metric: took 15.48526077s to wait for elevateKubeSystemPrivileges.
	I0813 21:07:30.543949   10272 kubeadm.go:392] StartCluster complete in 5m56.564780701s
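
The burst of `kubectl get sa default` runs before this is elevateKubeSystemPrivileges polling twice a second until the ServiceAccount controller has created the "default" account, a common signal that the controller-manager is functional after kubeadm init; here that took about 15.5s. An equivalent wait via client-go is sketched below; the function name and interval are assumptions, not minikube's kubeadm.go.

package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultServiceAccount polls until the "default" ServiceAccount exists
// in the default namespace, as the repeated `kubectl get sa default` runs
// above do from the shell.
func waitDefaultServiceAccount(cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}
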
	I0813 21:07:30.543981   10272 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:30.544141   10272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:07:30.545813   10272 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:31.081760   10272 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205823-30853" rescaled to 1
	I0813 21:07:31.081820   10272 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.49 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 21:07:31.083916   10272 out.go:177] * Verifying Kubernetes components...
	I0813 21:07:31.083983   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:07:31.081886   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:07:31.081888   10272 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:07:31.084080   10272 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084084   10272 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084099   10272 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.082132   10272 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0813 21:07:31.084108   10272 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:07:31.084120   10272 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084134   10272 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084143   10272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084151   10272 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084158   10272 addons.go:147] addon metrics-server should already be in state true
	I0813 21:07:31.084100   10272 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084168   10272 addons.go:147] addon dashboard should already be in state true
	I0813 21:07:31.084183   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084189   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084158   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084632   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084685   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084687   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084751   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084792   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084865   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.105064   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0813 21:07:31.105078   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0813 21:07:31.105589   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105724   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105733   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0813 21:07:31.105826   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0813 21:07:31.106201   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106225   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106288   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.106388   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106410   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106656   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106795   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106823   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106845   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106940   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.107274   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.107310   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.107372   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.107393   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.107505   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.107679   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.107914   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.108023   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108066   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.108456   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108502   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.121147   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0813 21:07:31.120919   10272 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.121411   10272 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:07:31.121457   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.121491   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0813 21:07:31.121993   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.122297   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.122764   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123195   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123739   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123763   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.123790   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123822   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.124154   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124287   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124315   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.124496   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.128429   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130930   10272 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:07:31.129602   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130875   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0813 21:07:31.132382   10272 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.132436   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:07:31.132451   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:07:31.132474   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.134119   10272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:07:31.134224   10272 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.134241   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:07:31.134259   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.132855   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.135094   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.135114   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.135252   10272 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.135886   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.136518   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.140126   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.140398   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:27.404366   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.901079   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.902091   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.142209   10272 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.142270   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:07:31.140792   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142282   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:07:31.140956   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.142313   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142015   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.142480   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.142494   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142517   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142738   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.142977   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.143006   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143155   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.143333   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.143530   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143544   10272 node_ready.go:49] node "old-k8s-version-20210813205823-30853" has status "Ready":"True"
	I0813 21:07:31.143557   10272 node_ready.go:38] duration metric: took 8.284522ms waiting for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.143568   10272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:31.145891   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0813 21:07:31.146234   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.146769   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.146792   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.147190   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.147843   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.147892   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.148364   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148819   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.148848   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148994   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.149157   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.149288   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.149464   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.154492   10272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:31.159199   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0813 21:07:31.159608   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.160083   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.160107   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.160442   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.160628   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.163581   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.163764   10272 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.163780   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:07:31.163796   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.169112   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169507   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.169535   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169656   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.169820   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.170004   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.170153   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.334616   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:07:31.339091   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.350144   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:07:31.350160   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:07:31.366866   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:07:31.366889   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:07:31.415434   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:07:31.415460   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:07:31.415813   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.439763   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:07:31.439787   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:07:31.551531   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.551559   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:07:31.614721   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:07:31.614757   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:07:31.648730   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.686266   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:07:31.686288   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:07:31.766323   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:07:31.766354   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:07:32.021208   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:07:32.021232   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:07:32.128868   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:07:32.128914   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:07:32.396755   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:07:32.396784   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:07:32.629623   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:32.629647   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:07:32.876963   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:33.170819   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.554610   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.219955078s)
	I0813 21:07:33.554661   10272 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
	I0813 21:07:33.554710   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.215586915s)
	I0813 21:07:33.554766   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.138920482s)
	I0813 21:07:33.554845   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554810   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554909   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.554882   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555205   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555224   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555237   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555251   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555322   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555339   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.555352   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555362   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.557880   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557881   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557894   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557900   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557931   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557951   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557969   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.558002   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.558255   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.558287   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.558297   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.417993   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.769219397s)
	I0813 21:07:34.418041   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.418055   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.419702   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.419703   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.419721   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.419735   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.419744   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.420013   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.420030   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.420042   10272 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:34.719323   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.842300346s)
	I0813 21:07:34.719378   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719393   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.719692   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.719710   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.719720   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719731   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.721171   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.721190   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.721177   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.723692   10272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:07:34.723719   10272 addons.go:344] enableAddons completed in 3.64184317s
	I0813 21:07:31.421963   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.916790   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.903029   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.402184   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:35.688121   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.171925   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.422423   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.916463   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.403153   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.903100   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.668346   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.668696   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:44.669555   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.922382   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.982831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.413525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:43.402566   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.905536   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:46.733235   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.227754709s)
	I0813 21:07:46.733320   10867 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:46.749380   10867 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:46.749451   10867 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:46.789090   10867 cri.go:76] found id: ""
	I0813 21:07:46.789192   10867 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:46.797753   10867 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:46.805773   10867 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:46.805816   10867 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:07:47.366092   10867 out.go:204]   - Generating certificates and keys ...
	I0813 21:07:48.287070   10867 out.go:204]   - Booting up control plane ...
	I0813 21:07:46.669635   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.169303   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.414190   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.914581   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:48.403863   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:50.902452   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.170024   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.672034   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:52.419570   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:54.922828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.400843   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:55.401813   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.169442   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.173990   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:00.180299   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.414460   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.414953   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.402188   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.407382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:01.902586   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.672361   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.168918   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.917732   10867 out.go:204]   - Configuring RBAC rules ...
	I0813 21:08:05.478215   10867 cni.go:93] Creating CNI manager for ""
	I0813 21:08:05.478240   10867 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:08:01.415978   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.916377   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.903277   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.908821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.480079   10867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:08:05.480166   10867 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:08:05.490836   10867 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:08:05.516775   10867 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813205917-30853 minikube.k8s.io/updated_at=2021_08_13T21_08_05_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.571274   10867 ops.go:34] apiserver oom_adj: -16
	I0813 21:08:05.877007   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.498456   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.997686   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.498266   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.998377   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:08.498124   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.171495   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.171976   10272 pod_ready.go:92] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.172005   10272 pod_ready.go:81] duration metric: took 37.017483324s waiting for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.172023   10272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178546   10272 pod_ready.go:92] pod "kube-proxy-xnqfc" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.178572   10272 pod_ready.go:81] duration metric: took 6.540181ms waiting for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178582   10272 pod_ready.go:38] duration metric: took 37.035002251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:08.178607   10272 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:08.178659   10272 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:08.193211   10272 api_server.go:70] duration metric: took 37.111356956s to wait for apiserver process to appear ...
	I0813 21:08:08.193234   10272 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:08.193245   10272 api_server.go:239] Checking apiserver healthz at https://192.168.83.49:8443/healthz ...
	I0813 21:08:08.200770   10272 api_server.go:265] https://192.168.83.49:8443/healthz returned 200:
	ok
	I0813 21:08:08.201945   10272 api_server.go:139] control plane version: v1.14.0
	I0813 21:08:08.201960   10272 api_server.go:129] duration metric: took 8.721341ms to wait for apiserver health ...
	I0813 21:08:08.201968   10272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:08.206023   10272 system_pods.go:59] 4 kube-system pods found
	I0813 21:08:08.206043   10272 system_pods.go:61] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206047   10272 system_pods.go:61] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206054   10272 system_pods.go:61] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.206058   10272 system_pods.go:61] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206065   10272 system_pods.go:74] duration metric: took 4.091873ms to wait for pod list to return data ...
	I0813 21:08:08.206072   10272 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:08.209997   10272 default_sa.go:45] found service account: "default"
	I0813 21:08:08.210015   10272 default_sa.go:55] duration metric: took 3.938001ms for default service account to be created ...
	I0813 21:08:08.210022   10272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:08.214317   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.214336   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214348   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.214354   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214373   10272 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.433733   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.433762   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433770   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433781   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.433788   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433807   10272 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.735301   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:08.735333   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735350   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:08.735360   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.735366   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735412   10272 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.097711   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.097745   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097753   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097758   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.097765   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.097770   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097788   10272 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.584281   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.584311   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584317   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584321   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.584329   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.584333   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584352   10272 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:10.134667   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:10.134694   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134701   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134706   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.134712   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.134716   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134738   10272 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:05.922361   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.419726   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.401315   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.909126   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.998041   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.498515   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.998297   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.498018   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.997716   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.497679   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.998238   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.498701   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.997887   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:13.498358   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.825951   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:10.825981   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825987   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.825991   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825995   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.826001   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.826006   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.826027   10272 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:11.871229   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:11.871263   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871270   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871274   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871279   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871292   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:11.871300   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871321   10272 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:12.905014   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:12.905044   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905052   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905075   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:12.905081   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905105   10272 retry.go:31] will retry after 1.268973106s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:14.179146   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:14.179173   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179179   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179183   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179188   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179195   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:14.179202   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179223   10272 retry.go:31] will retry after 1.733071555s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:10.914496   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.924919   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.415784   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.401246   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.408120   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.997632   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.497943   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.998249   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.498543   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.998283   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.497729   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.997873   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.497972   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.997958   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.497761   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.997883   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:19.220539   10867 kubeadm.go:985] duration metric: took 13.703767036s to wait for elevateKubeSystemPrivileges.
	I0813 21:08:19.220607   10867 kubeadm.go:392] StartCluster complete in 6m5.865041156s
	I0813 21:08:19.220635   10867 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.220787   10867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:08:19.223909   10867 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.752954   10867 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813205917-30853" rescaled to 1
	I0813 21:08:19.753018   10867 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:08:19.754708   10867 out.go:177] * Verifying Kubernetes components...
	I0813 21:08:19.754778   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:19.753082   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:08:19.753107   10867 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:08:19.753299   10867 config.go:177] Loaded profile config "embed-certs-20210813205917-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:08:19.754891   10867 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754904   10867 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754933   10867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813205917-30853"
	I0813 21:08:19.754932   10867 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754940   10867 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754970   10867 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:19.754974   10867 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.754988   10867 addons.go:147] addon dashboard should already be in state true
	W0813 21:08:19.754987   10867 addons.go:147] addon metrics-server should already be in state true
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.754914   10867 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.755116   10867 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:08:19.755134   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755511   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755539   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755571   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755606   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755637   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755686   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.770580   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0813 21:08:19.771121   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.771377   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0813 21:08:19.771830   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.771853   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.771954   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.772247   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.772723   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.772739   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.772901   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0813 21:08:19.773026   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.773068   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.773413   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.773902   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.773924   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.774397   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774463   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774563   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.775023   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.775063   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.784550   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0813 21:08:19.784959   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.785506   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.785522   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.785894   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.786493   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.786525   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787205   10867 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.787228   10867 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:08:19.787259   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.787583   10867 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.787674   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.787718   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787787   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0813 21:08:19.787910   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0813 21:08:19.788204   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789084   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789106   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.789211   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789825   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.789931   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789953   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.790005   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.790276   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.790437   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.794978   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.794986   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.797284   10867 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.798757   10867 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:08:19.797345   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:08:19.798798   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:08:19.798822   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800334   10867 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.800389   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:08:19.800399   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:08:19.799838   10867 node_ready.go:49] node "embed-certs-20210813205917-30853" has status "Ready":"True"
	I0813 21:08:19.800420   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800422   10867 node_ready.go:38] duration metric: took 12.815275ms waiting for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.800442   10867 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:19.802028   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0813 21:08:19.802460   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.802983   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.803025   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.803483   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.803731   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.809104   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.809531   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.809654   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:15.917751   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:15.917783   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917792   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917799   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917805   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917816   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:15.917823   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917844   10272 retry.go:31] will retry after 2.410580953s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:18.337846   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:18.337883   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337892   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337898   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337905   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337916   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:18.337923   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337944   10272 retry.go:31] will retry after 3.437877504s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:17.916739   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:20.415225   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.901469   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.902763   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.903648   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.811430   10867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:08:19.810007   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811541   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.811578   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811581   10867 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:19.810168   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.810293   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.810559   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.811047   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0813 21:08:19.811649   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811674   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:08:19.811689   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.811908   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.811910   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812443   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.812464   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.812475   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.813065   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.813083   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.813470   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.814035   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.814070   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.818289   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818751   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.818811   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.818838   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818903   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.819054   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.819209   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.825837   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0813 21:08:19.826199   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.826605   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.826624   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.826952   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.827127   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.830318   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.830538   10867 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:19.830553   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:08:19.830570   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.835761   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836143   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.836172   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836286   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.836451   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.836602   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.836724   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:20.037292   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:08:20.037321   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:08:20.099263   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:08:20.099292   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:08:20.117736   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:20.146467   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:08:20.146494   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:08:20.148636   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:20.180430   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:08:20.180464   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:08:20.300161   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:08:20.301107   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:08:20.301131   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:08:20.311540   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.311565   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:08:20.390587   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:08:20.390623   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:08:20.411556   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.513347   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:08:20.513381   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:08:20.562665   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:08:20.562692   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:08:20.637151   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:08:20.637186   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:08:20.697238   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:08:20.697266   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:08:20.722593   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:20.722622   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:08:20.888939   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:21.832691   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:22.499631   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.381850453s)
	I0813 21:08:22.499694   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.499708   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.499992   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500011   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500021   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500031   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500251   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500299   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500317   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500327   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500578   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500587   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:22.500601   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607350   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458674806s)
	I0813 21:08:22.607409   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607423   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607684   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607702   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607713   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607728   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607970   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607987   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.671948   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.371722218s)
	I0813 21:08:22.671991   10867 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 21:08:23.212733   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.801121223s)
	I0813 21:08:23.212785   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.212801   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213078   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213122   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213131   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213147   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.213162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213417   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213454   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213463   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213476   10867 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:23.973313   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.127694   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.238655669s)
	I0813 21:08:24.127768   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.127783   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128088   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128134   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:24.128152   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.128162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128402   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128416   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:21.783186   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:21.783216   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783222   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783226   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783231   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783238   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:21.783242   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783260   10272 retry.go:31] will retry after 3.261655801s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:25.051995   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:25.052028   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052037   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052051   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:25.052058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052076   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:25.052086   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052104   10272 retry.go:31] will retry after 4.086092664s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:22.421981   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.915565   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.903699   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:25.903987   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.130282   10867 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:08:24.130308   10867 addons.go:344] enableAddons completed in 4.377209962s
	I0813 21:08:26.342246   10867 pod_ready.go:92] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:26.342272   10867 pod_ready.go:81] duration metric: took 6.532595189s waiting for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:26.342282   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:28.367486   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.149965   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:29.149997   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150006   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150013   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:29.150019   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150025   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150035   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:29.150043   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150063   10272 retry.go:31] will retry after 6.402197611s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:26.928284   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.416662   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.403505   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.906239   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.367630   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.386002   10867 pod_ready.go:97] error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386040   10867 pod_ready.go:81] duration metric: took 5.043748322s waiting for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	E0813 21:08:31.386053   10867 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386063   10867 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395413   10867 pod_ready.go:92] pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.395442   10867 pod_ready.go:81] duration metric: took 9.37037ms waiting for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395456   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407839   10867 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.407860   10867 pod_ready.go:81] duration metric: took 12.39509ms waiting for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407872   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413811   10867 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.413832   10867 pod_ready.go:81] duration metric: took 5.950273ms waiting for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413845   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422794   10867 pod_ready.go:92] pod "kube-proxy-szvqm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.422819   10867 pod_ready.go:81] duration metric: took 8.966458ms waiting for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422831   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564060   10867 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.564136   10867 pod_ready.go:81] duration metric: took 141.29321ms waiting for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564168   10867 pod_ready.go:38] duration metric: took 11.763707327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:31.564208   10867 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:31.564290   10867 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:31.578890   10867 api_server.go:70] duration metric: took 11.8258395s to wait for apiserver process to appear ...
	I0813 21:08:31.578919   10867 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:31.578932   10867 api_server.go:239] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0813 21:08:31.585647   10867 api_server.go:265] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0813 21:08:31.586833   10867 api_server.go:139] control plane version: v1.21.3
	I0813 21:08:31.586868   10867 api_server.go:129] duration metric: took 7.925906ms to wait for apiserver health ...
	I0813 21:08:31.586879   10867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:31.766375   10867 system_pods.go:59] 8 kube-system pods found
	I0813 21:08:31.766406   10867 system_pods.go:61] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:31.766412   10867 system_pods.go:61] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:31.766416   10867 system_pods.go:61] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:31.766421   10867 system_pods.go:61] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:31.766424   10867 system_pods.go:61] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:31.766428   10867 system_pods.go:61] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:31.766436   10867 system_pods.go:61] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:31.766440   10867 system_pods.go:61] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:31.766447   10867 system_pods.go:74] duration metric: took 179.562479ms to wait for pod list to return data ...
	I0813 21:08:31.766456   10867 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:31.964873   10867 default_sa.go:45] found service account: "default"
	I0813 21:08:31.964899   10867 default_sa.go:55] duration metric: took 198.43488ms for default service account to be created ...
	I0813 21:08:31.964911   10867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:32.168305   10867 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:32.168349   10867 system_pods.go:89] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:32.168359   10867 system_pods.go:89] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:32.168369   10867 system_pods.go:89] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:32.168377   10867 system_pods.go:89] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:32.168384   10867 system_pods.go:89] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:32.168390   10867 system_pods.go:89] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:32.168402   10867 system_pods.go:89] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:32.168412   10867 system_pods.go:89] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:32.168423   10867 system_pods.go:126] duration metric: took 203.506299ms to wait for k8s-apps to be running ...
	I0813 21:08:32.168436   10867 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:32.168487   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:32.183556   10867 system_svc.go:56] duration metric: took 15.110742ms WaitForService to wait for kubelet.
	I0813 21:08:32.183585   10867 kubeadm.go:547] duration metric: took 12.430541017s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:32.183611   10867 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:32.366938   10867 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:32.366970   10867 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:32.366989   10867 node_conditions.go:105] duration metric: took 183.372537ms to run NodePressure ...
	I0813 21:08:32.367004   10867 start.go:231] waiting for startup goroutines ...
	I0813 21:08:32.428402   10867 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:08:32.430754   10867 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813205917-30853" cluster and "default" namespace by default
	I0813 21:08:31.925048   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.421689   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:33.402937   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.404185   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.559235   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:35.559264   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559272   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559278   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559284   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559289   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559299   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:35.559305   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559325   10272 retry.go:31] will retry after 6.062999549s: missing components: kube-controller-manager
	I0813 21:08:36.917628   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:39.412918   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:37.902004   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.400508   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:41.627792   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:41.627828   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627837   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627844   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627851   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:41.627857   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627863   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627874   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:41.627882   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627906   10272 retry.go:31] will retry after 10.504197539s: missing components: kube-controller-manager
	I0813 21:08:41.415467   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.418679   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.419622   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:42.401588   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:44.413733   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:46.903773   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.914837   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.413949   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.140470   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:52.140498   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140503   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140508   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140512   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140516   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140520   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140526   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:52.140531   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140549   10272 system_pods.go:126] duration metric: took 43.930520866s to wait for k8s-apps to be running ...
	I0813 21:08:52.140578   10272 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:52.140627   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:52.153255   10272 system_svc.go:56] duration metric: took 12.668182ms WaitForService to wait for kubelet.
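
The WaitForService step above shells out to systemd over SSH. A minimal local sketch of the same check; note that `systemctl is-active --quiet` answers purely through its exit status, so no output parsing is needed (the log's extra "service" argument is a minikube quirk and is omitted here):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the check in the log: `systemctl is-active --quiet`
// exits 0 when the unit is active and non-zero otherwise, so the returned
// error alone answers the question.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
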
	I0813 21:08:52.153279   10272 kubeadm.go:547] duration metric: took 1m21.071431976s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:52.153300   10272 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:52.156915   10272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:52.156939   10272 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:52.156953   10272 node_conditions.go:105] duration metric: took 3.648615ms to run NodePressure ...
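
The node_conditions lines above read the node's capacity and pressure conditions. A minimal client-go sketch of the same read, assuming a reachable cluster via the default kubeconfig; this is illustrative only, not minikube's internal NodePressure check:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These are the two capacities printed in the log above.
		fmt.Println("cpu capacity:", n.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral-storage capacity:", n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}
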
	I0813 21:08:52.156962   10272 start.go:231] waiting for startup goroutines ...
	I0813 21:08:52.202043   10272 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 21:08:52.204217   10272 out.go:177] 
	W0813 21:08:52.204388   10272 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 21:08:52.206057   10272 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:08:52.207407   10272 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813205823-30853" cluster and "default" namespace by default
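
The warning above fires because the bundled kubectl (1.20.5) and the cluster (1.14.0) are six minor versions apart, well beyond the one-minor-version skew kubectl supports. A minimal sketch of that minor-skew computation; minorSkew is a hypothetical helper, not minikube's code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses "major.minor.patch" strings and returns the absolute
// difference of the minor components, the quantity reported as
// "(minor skew: N)" in the log above.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c < s {
		c, s = s, c
	}
	return c - s, nil
}

func main() {
	skew, _ := minorSkew("1.20.5", "1.14.0")
	fmt.Println("minor skew:", skew) // 6, so the warning is justified
}
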
	I0813 21:08:48.904448   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.401687   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.414001   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.916108   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:53.903280   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.402202   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.918707   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.414767   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:58.402828   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:00.404574   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.415921   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:03.415961   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.418118   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
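
The pod_ready:102 lines throughout this section come from minikube repeatedly reading each metrics-server pod's Ready condition until it flips to True. A minimal client-go sketch of that check, assuming a reachable cluster via the default kubeconfig; the pod name is taken from the log and would be substituted in practice:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// condition the pod_ready log lines above keep finding "False".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.Background(), "metrics-server-7c784ccb57-xfj59", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", pod.Name)
		time.Sleep(2 * time.Second)
	}
}
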
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:01:11 UTC, end at Fri 2021-08-13 21:09:06 UTC. --
	Aug 13 21:09:05 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:05.317847380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=ae67939c-19e3-4729-9e68-d7a75a206f8d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.046329233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e331f9d3-ce81-4aeb-b6dd-fb550c1b1489 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.046394157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e331f9d3-ce81-4aeb-b6dd-fb550c1b1489 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.046853161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io
.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.k
ubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernete
s.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddd
ad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXI
TED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d71
8e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:m
ap[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{}
,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e331f9d3-ce81-4aeb-b6dd-fb550c1b1489 name=/runtime.v1alpha2.RuntimeService/ListContainers
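
Each Request/Response pair above is a client (here, the kubelet and minikube's log collector) calling CRI-O's ListContainers RPC on the CRI socket. A minimal sketch of issuing the same call; it assumes CRI-O's default socket path and root privileges, and uses the v1alpha2 CRI API named in the log's /runtime.v1alpha2.RuntimeService/ListContainers entries. From the command line, `crictl ps -a` issues the equivalent RPC.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// /var/run/crio/crio.sock is CRI-O's default endpoint; adjust if your
	// runtime is configured differently. Requires root to open the socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter reproduces the "No filters were applied" case in the
	// log: the full container list comes back.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
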
,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ca61db6-19b9-4bf2-bb26-13b527e0efdb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.248610809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f388039f-d47f-4b36-bb71-5129e2e4d9f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.248936727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f388039f-d47f-4b36-bb71-5129e2e4d9f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.249237414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io
.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.k
ubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernete
s.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddd
ad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXI
TED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d71
8e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:m
ap[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{}
,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f388039f-d47f-4b36-bb71-5129e2e4d9f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.287972454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b71f9eb-ebd0-4dc1-a522-439d86a4e0c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.288032003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b71f9eb-ebd0-4dc1-a522-439d86a4e0c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.288250629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io
.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.k
ubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernete
s.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddd
ad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXI
TED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d71
8e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:m
ap[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{}
,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b71f9eb-ebd0-4dc1-a522-439d86a4e0c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.329054907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ae4c2f1-764e-4903-83e6-c6227a2a09e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.329114504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ae4c2f1-764e-4903-83e6-c6227a2a09e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.329347388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io
.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.k
ubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernete
s.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddd
ad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXI
TED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d71
8e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:m
ap[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{}
,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ae4c2f1-764e-4903-83e6-c6227a2a09e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
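These repeated Request/Response pairs (roughly 40 ms apart) are unfiltered list calls against CRI-O's runtime.v1alpha2.RuntimeService/ListContainers; the same full container list comes back each time. For manual inspection, a roughly equivalent unfiltered query can be issued through the CRI command-line client. A minimal sketch, assuming crictl is available inside the VM for this profile:

    out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo crictl ps -a"
    # sends a ListContainersRequest with no state filter, much like the debug entries above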
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID
	fdedf4f5fea52       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   41 seconds ago       Exited              dashboard-metrics-scraper   3                   d95d21dd1243c
	c0825e30b45e4       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   About a minute ago   Running             coredns                     1                   18dceec12c1d9
	5b4555812b0f6       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   About a minute ago   Running             kubernetes-dashboard        0                   cc0cd5379a478
	0fdc2c1dd8463       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner         0                   91b18347830f0
	0c5ce365d0f35       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   About a minute ago   Exited              coredns                     0                   18dceec12c1d9
	8ee4160af1974       5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d   About a minute ago   Running             kube-proxy                  0                   fe909e4ddd0b0
	974c6dadfe125       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d   2 minutes ago        Running             etcd                        0                   f6f45c2d9b6a5
	4cfcbd86d9955       b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150   2 minutes ago        Running             kube-controller-manager     0                   a4702c1a10590
	8ba6263efe7a5       ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6   2 minutes ago        Running             kube-apiserver              0                   bf3fb59533f25
	02c918cf1c5c4       00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492   2 minutes ago        Running             kube-scheduler              0                   2c658baa473dd
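Each CONTAINER value in this table is the 13-character prefix of a full container ID from the ListContainersResponse dumps above, for example:

    fdedf4f5fea52  ->  fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90   (dashboard-metrics-scraper, attempt 3, exited)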
	
	* 
	* ==> coredns [0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49] <==
	* .:53
	2021-08-13T21:07:37.978Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:07:37.978Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:07:37.978Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	E0813 21:08:02.979044       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-j78d5.unknownuser.log.ERROR.20210813-210802.1: no such file or directory
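Read together, these lines explain the coredns restart counted above: the kubernetes plugin could not reach the API server's ClusterIP (10.96.0.1:443) within its timeout, and klog then aborted because its fallback log file could not be created, exiting the container. One way to cross-check that ClusterIP, sketched with this run's kubectl context (the address is expected to be the first IP of the default 10.96.0.0/12 service range):

    kubectl --context old-k8s-version-20210813205823-30853 get svc kubernetes
    # CLUSTER-IP should read 10.96.0.1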
	
	* 
	* ==> coredns [c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4] <==
	* .:53
	2021-08-13T21:08:03.872Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:08:03.872Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:08:03.872Z [INFO] plugin/reload: Running configuration MD5 = 6c0e799ff6797682aae95e2097dfc0d9
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813205823-30853
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813205823-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813205823-30853
	                    minikube.k8s.io/updated_at=2021_08_13T21_07_15_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:07:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.49
	  Hostname:    old-k8s-version-20210813205823-30853
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	System Info:
	 Machine ID:                 65adc67ea807433696e3e7757ea3c00d
	 System UUID:                65adc67e-a807-4336-96e3-e7757ea3c00d
	 Boot ID:                    827b3c62-a4f5-4410-bca7-56b86fb51480
	 Kernel Version:             4.19.182
	 OS Image:                   Buildroot 2020.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.20.2
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-j78d5                                         100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     96s
	  kube-system                etcd-old-k8s-version-20210813205823-30853                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                kube-apiserver-old-k8s-version-20210813205823-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                kube-controller-manager-old-k8s-version-20210813205823-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                kube-proxy-xnqfc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                kube-scheduler-old-k8s-version-20210813205823-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                metrics-server-8546d8b77b-mm6vs                                 100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         91s
	  kube-system                storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-2vltn                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-264rf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             370Mi (17%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                              Message
	  ----    ------                   ----                 ----                                              -------
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet, old-k8s-version-20210813205823-30853     Node old-k8s-version-20210813205823-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x7 over 2m4s)  kubelet, old-k8s-version-20210813205823-30853     Node old-k8s-version-20210813205823-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet, old-k8s-version-20210813205823-30853     Node old-k8s-version-20210813205823-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 94s                  kube-proxy, old-k8s-version-20210813205823-30853  Starting kube-proxy.
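The percentages in this dump are computed against the node's Allocatable figures shown above. As a worked check: 750m of CPU requested out of 2 allocatable cores is 750/2000 = 37.5%, reported as 37%; 370Mi of memory against 2186320Ki allocatable is 378880/2186320 ≈ 17%. The whole section can be regenerated with:

    kubectl --context old-k8s-version-20210813205823-30853 describe node old-k8s-version-20210813205823-30853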
	
	* 
	* ==> dmesg <==
	* [  +3.794724] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.034270] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.085040] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
	[  +0.654997] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.346746] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006456] vboxguest: PCI device not found, probably running on physical hardware.
	[ +16.906913] systemd-fstab-generator[2136]: Ignoring "noauto" for root device
	[  +1.901248] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +0.287647] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +5.784701] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
	[ +14.150103] kauditd_printk_skb: 20 callbacks suppressed
	[Aug13 21:02] kauditd_printk_skb: 104 callbacks suppressed
	[  +6.217917] kauditd_printk_skb: 26 callbacks suppressed
	[Aug13 21:03] NFSD: Unable to end grace period: -110
	[Aug13 21:06] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.312838] kauditd_printk_skb: 44 callbacks suppressed
	[Aug13 21:07] systemd-fstab-generator[5828]: Ignoring "noauto" for root device
	[ +14.125056] tee (6221): /proc/6029/oom_adj is deprecated, please use /proc/6029/oom_score_adj instead.
	[ +16.773994] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.419768] kauditd_printk_skb: 134 callbacks suppressed
	[Aug13 21:08] kauditd_printk_skb: 2 callbacks suppressed
	[Aug13 21:09] systemd-fstab-generator[8008]: Ignoring "noauto" for root device
	[  +0.834273] systemd-fstab-generator[8062]: Ignoring "noauto" for root device
	[  +1.008554] systemd-fstab-generator[8114]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e] <==
	* 2021-08-13 21:07:05.866606 I | raft: newRaft f0eab59e12edad64 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2021-08-13 21:07:05.866717 I | raft: f0eab59e12edad64 became follower at term 1
	2021-08-13 21:07:05.876038 W | auth: simple token is not cryptographically signed
	2021-08-13 21:07:05.881029 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-13 21:07:05.882624 I | etcdserver: f0eab59e12edad64 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-13 21:07:05.883767 I | etcdserver/membership: added member f0eab59e12edad64 [https://192.168.83.49:2380] to cluster 42a6ff8259927986
	2021-08-13 21:07:05.884284 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 21:07:05.884485 I | embed: listening for metrics on http://192.168.83.49:2381
	2021-08-13 21:07:05.884967 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 21:07:05.967596 I | raft: f0eab59e12edad64 is starting a new election at term 1
	2021-08-13 21:07:05.967919 I | raft: f0eab59e12edad64 became candidate at term 2
	2021-08-13 21:07:05.968094 I | raft: f0eab59e12edad64 received MsgVoteResp from f0eab59e12edad64 at term 2
	2021-08-13 21:07:05.968113 I | raft: f0eab59e12edad64 became leader at term 2
	2021-08-13 21:07:05.968526 I | raft: raft.node: f0eab59e12edad64 elected leader f0eab59e12edad64 at term 2
	2021-08-13 21:07:05.969917 I | etcdserver: published {Name:old-k8s-version-20210813205823-30853 ClientURLs:[https://192.168.83.49:2379]} to cluster 42a6ff8259927986
	2021-08-13 21:07:05.970382 I | embed: ready to serve client requests
	2021-08-13 21:07:05.970992 I | embed: ready to serve client requests
	2021-08-13 21:07:05.972962 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:07:05.973600 I | etcdserver: setting up the initial cluster version to 3.3
	2021-08-13 21:07:05.975344 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-13 21:07:05.975529 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 21:07:05.976100 I | embed: serving client requests on 192.168.83.49:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 21:07:39.655065 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (133.924751ms) to execute
	
	* 
	* ==> kernel <==
	*  21:09:06 up 8 min,  0 users,  load average: 1.60, 1.03, 0.52
	Linux old-k8s-version-20210813205823-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8] <==
	* I0813 21:08:54.610025       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:55.610532       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:55.610972       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:56.611278       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:56.611726       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:57.612225       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:57.612330       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:58.612601       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:58.612780       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:59.613098       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:59.613454       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:00.613969       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:00.614273       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:01.614611       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:01.614870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:02.615292       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:02.615478       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:03.615978       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:03.616396       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:04.616853       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:04.617203       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:05.617361       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:05.617566       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:06.617983       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:06.618169       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770] <==
	* E0813 21:07:34.348237       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.348975       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.364805       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.408369       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.409030       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.409290       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.409507       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.459081       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.459420       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.459604       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.459733       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.484871       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.484999       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.504044       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.504060       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.512472       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.512556       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:35.074846       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"7a88a60e-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-mm6vs
	I0813 21:07:35.547378       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-2vltn
	I0813 21:07:35.581263       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-264rf
	E0813 21:08:00.263328       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:08:02.818481       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:08:30.516077       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:08:34.820909       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:09:00.769085       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db] <==
	* W0813 21:07:32.286920       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 21:07:32.377114       1 server_others.go:148] Using iptables Proxier.
	I0813 21:07:32.377573       1 server_others.go:178] Tearing down inactive rules.
	E0813 21:07:32.555467       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0813 21:07:32.807550       1 server.go:555] Version: v1.14.0
	I0813 21:07:32.835831       1 config.go:202] Starting service config controller
	I0813 21:07:32.835959       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 21:07:32.836080       1 config.go:102] Starting endpoints config controller
	I0813 21:07:32.836097       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 21:07:32.940892       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0813 21:07:32.941257       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3] <==
	* W0813 21:07:05.469817       1 authentication.go:55] Authentication is disabled
	I0813 21:07:05.469890       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 21:07:05.470320       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 21:07:10.098259       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:07:10.109142       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:07:10.111429       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:07:10.111795       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:07:10.113885       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:07:10.114379       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:07:10.116354       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:07:10.116605       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:07:10.117913       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:07:10.122989       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:07:11.100451       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:07:11.114270       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:07:11.120553       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:07:11.125479       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:07:11.127101       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:07:11.131833       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:07:11.133225       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:07:11.135747       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:07:11.136864       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:07:11.138204       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0813 21:07:12.975783       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 21:07:13.076161       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:01:11 UTC, end at Fri 2021-08-13 21:09:06 UTC. --
	Aug 13 21:07:50 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:07:50.729894    5849 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:07:50 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:07:50.729930    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:07:51 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:07:51.030084    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:02 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:02.718025    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:03 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:03.286751    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:04 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:04.497560    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:11 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:11.030497    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:13 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:13.328711    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.745400    5849 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.745747    5849 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.746000    5849 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.746279    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:08:23 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:23.369212    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:25 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:25.678468    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:29 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:29.717615    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:31 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:31.030197    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:42 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:42.717821    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:43 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:43.482580    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:44 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:44.713849    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:53 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:53.537229    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:53 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:53.715860    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:57 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:57.714068    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:09:03 old-k8s-version-20210813205823-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:09:03 old-k8s-version-20210813205823-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:09:03 old-k8s-version-20210813205823-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6] <==
	* 2021/08/13 21:07:36 Starting overwatch
	2021/08/13 21:07:36 Using namespace: kubernetes-dashboard
	2021/08/13 21:07:36 Using in-cluster config to connect to apiserver
	2021/08/13 21:07:36 Using secret token for csrf signing
	2021/08/13 21:07:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:07:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:07:36 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 21:07:36 Generating JWE encryption key
	2021/08/13 21:07:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:07:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:07:36 Initializing JWE encryption key from synchronized object
	2021/08/13 21:07:36 Creating in-cluster Sidecar client
	2021/08/13 21:07:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:07:36 Serving insecurely on HTTP port: 9090
	2021/08/13 21:08:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:08:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:09:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6] <==
	* I0813 21:07:35.312584       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:07:35.346422       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:07:35.347336       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:07:35.368323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:07:35.369471       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205823-30853_51d1d3a6-95e9-48b0-92aa-548fed77c2e1!
	I0813 21:07:35.380231       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7a516431-fc7a-11eb-b132-525400ed6e80", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813205823-30853_51d1d3a6-95e9-48b0-92aa-548fed77c2e1 became leader
	I0813 21:07:35.471401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205823-30853_51d1d3a6-95e9-48b0-92aa-548fed77c2e1!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853: exit status 2 (276.117261ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-mm6vs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 describe pod metrics-server-8546d8b77b-mm6vs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813205823-30853 describe pod metrics-server-8546d8b77b-mm6vs: exit status 1 (64.857409ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-mm6vs" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813205823-30853 describe pod metrics-server-8546d8b77b-mm6vs: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853: exit status 2 (256.541774ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 logs -n 25: (1.431869074s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p bridge-20210813204703-30853                    | bridge-20210813204703-30853                     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	| delete  | -p                                                | flannel-20210813204703-30853                    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 20:59:17 UTC |
	|         | flannel-20210813204703-30853                      |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:23 UTC | Fri, 13 Aug 2021 21:00:44 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:56 UTC | Fri, 13 Aug 2021 21:00:57 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:57 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:01:00 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813204600-30853         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:01 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | kubernetes-upgrade-20210813204600-30853           |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813210102-30853      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:01:02 UTC |
	|         | disable-driver-mounts-20210813210102-30853        |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:17 UTC | Fri, 13 Aug 2021 21:01:05 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:18 UTC | Fri, 13 Aug 2021 21:01:19 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:19 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 21:02:15 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:27 UTC | Fri, 13 Aug 2021 21:02:28 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:03:15 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio           |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                  |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853              |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853              | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:03:32
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:03:32.257678   11600 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:03:32.257760   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257764   11600 out.go:311] Setting ErrFile to fd 2...
	I0813 21:03:32.257767   11600 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:03:32.257889   11600 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:03:32.258149   11600 out.go:305] Setting JSON to false
	I0813 21:03:32.297164   11600 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":9974,"bootTime":1628878638,"procs":184,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:03:32.297442   11600 start.go:121] virtualization: kvm guest
	I0813 21:03:32.300208   11600 out.go:177] * [no-preload-20210813205915-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:03:32.301763   11600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:03:32.300370   11600 notify.go:169] Checking for updates...
	I0813 21:03:32.303324   11600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:03:32.304875   11600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:03:32.306390   11600 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:03:32.306988   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:03:32.307576   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.307638   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.319235   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0813 21:03:32.319644   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.320320   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.320347   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.320748   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.320979   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.321189   11600 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:03:32.321646   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:32.321692   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:32.332966   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0813 21:03:32.333332   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:32.333819   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:32.333847   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:32.334199   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:32.334372   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:32.365034   11600 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:03:32.365061   11600 start.go:278] selected driver: kvm2
	I0813 21:03:32.365067   11600 start.go:751] validating driver "kvm2" against &{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.365197   11600 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:03:32.367047   11600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.367426   11600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:03:32.378154   11600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:03:32.378447   11600 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:03:32.378474   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:03:32.378482   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:03:32.378489   11600 start_flags.go:277] config:
	{Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:03:32.378585   11600 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:30.512688   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:33.010993   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:32.670472   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:35.171315   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:30.963285   11447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210813210102-30853" ...
	I0813 21:03:30.963310   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Start
	I0813 21:03:30.963467   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring networks are active...
	I0813 21:03:30.965431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network default is active
	I0813 21:03:30.965733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Ensuring network mk-default-k8s-different-port-20210813210102-30853 is active
	I0813 21:03:30.966083   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Getting domain xml...
	I0813 21:03:30.968061   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Creating domain...
	I0813 21:03:31.416170   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting to get IP...
	I0813 21:03:31.417365   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418005   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has current primary IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.418042   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Found IP for machine: 192.168.50.136
	I0813 21:03:31.418064   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserving static IP address...
	I0813 21:03:31.418520   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.418572   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | skip adding static IP to network mk-default-k8s-different-port-20210813210102-30853 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210813210102-30853", mac: "52:54:00:37:ca:98", ip: "192.168.50.136"}
	I0813 21:03:31.418592   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Reserved static IP address: 192.168.50.136
	I0813 21:03:31.418609   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Waiting for SSH to be available...
	I0813 21:03:31.418628   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:31.424645   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:31.425182   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:31.425389   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:31.425422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:31.425464   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:31.425482   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:31.425509   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:32.380458   11600 out.go:177] * Starting control plane node no-preload-20210813205915-30853 in cluster no-preload-20210813205915-30853
	I0813 21:03:32.380479   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:03:32.380628   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:03:32.380658   11600 cache.go:108] acquiring lock: {Name:mkb38baead8d508ff836651dee18a7788cf32c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380644   11600 cache.go:108] acquiring lock: {Name:mk46180cf67d5c541fa2597ef8e0122b51c3d66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380670   11600 cache.go:108] acquiring lock: {Name:mk7bb3b696fd3372110b0be599d95315e027c7ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380696   11600 cache.go:108] acquiring lock: {Name:mkf1d6f5d79a8fed4d2cc99505f5f3464b88e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380719   11600 cache.go:108] acquiring lock: {Name:mk828c96511ca39b5ec24da9b6afedd4727bdcf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380743   11600 cache.go:108] acquiring lock: {Name:mk03e6bcc333bfad143239419641099a94fed11e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380784   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 21:03:32.380790   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 21:03:32.380787   11600 cache.go:108] acquiring lock: {Name:mk928ab7caca14c2ebd27b364dc38d466ea61870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380747   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 21:03:32.380809   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 21:03:32.380803   11600 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 161.844µs
	I0813 21:03:32.380822   11600 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 21:03:32.380808   11600 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 149.17µs
	I0813 21:03:32.380819   11600 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 164.006µs
	I0813 21:03:32.380839   11600 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 21:03:32.380837   11600 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:03:32.380848   11600 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 21:03:32.380801   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 21:03:32.380838   11600 cache.go:108] acquiring lock: {Name:mk3d501986e0e48ddd0db3c6e93347910f1116e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380854   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 21:03:32.380853   11600 cache.go:108] acquiring lock: {Name:mkf7939d465d516c835d7d7703c105943f1ade9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380867   11600 start.go:313] acquiring machines lock for no-preload-20210813205915-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:03:32.380868   11600 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 155.968µs
	I0813 21:03:32.380881   11600 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 21:03:32.380876   11600 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 155.847µs
	I0813 21:03:32.380760   11600 cache.go:108] acquiring lock: {Name:mkec6e53ab9796f80ec65d6b99a6c3ee881fedd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:03:32.380890   11600 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380896   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 21:03:32.380899   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 21:03:32.380841   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 21:03:32.380909   11600 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 73.516µs
	I0813 21:03:32.380913   11600 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 62.387µs
	I0813 21:03:32.380921   11600 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380939   11600 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380925   11600 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 136.425µs
	I0813 21:03:32.380966   11600 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 21:03:32.380936   11600 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 21:03:32.380982   11600 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 225.197µs
	I0813 21:03:32.380995   11600 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 21:03:32.380828   11600 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.9µs
	I0813 21:03:32.381004   11600 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 21:03:32.381012   11600 cache.go:88] Successfully saved all images to host disk.
	I0813 21:03:35.012590   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.514197   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:37.669098   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.168374   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:40.013348   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.014535   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:42.670990   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:44.671751   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:43.440320   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:43.440353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:43.440363   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | command : exit 0
	I0813 21:03:43.440369   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | err     : exit status 255
	I0813 21:03:43.440381   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | output  : 
	I0813 21:03:47.896090   11600 start.go:317] acquired machines lock for "no-preload-20210813205915-30853" in 15.515202861s
	I0813 21:03:47.896143   11600 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:03:47.896154   11600 fix.go:55] fixHost starting: 
	I0813 21:03:47.896500   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:03:47.896553   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:03:47.909531   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0813 21:03:47.909942   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:03:47.910569   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:03:47.910588   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:03:47.910953   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:03:47.911154   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:03:47.911327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:03:47.913763   11600 fix.go:108] recreateIfNeeded on no-preload-20210813205915-30853: state=Stopped err=<nil>
	I0813 21:03:47.913791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	W0813 21:03:47.913946   11600 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:03:44.511774   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.514028   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:48.515447   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:47.170765   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:49.174655   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:46.440683   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:46.445948   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446304   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.446340   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.446496   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH client type: external
	I0813 21:03:46.446533   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa (-rw-------)
	I0813 21:03:46.446579   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:46.446601   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | About to run SSH command:
	I0813 21:03:46.446618   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | exit 0
	I0813 21:03:46.582984   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:03:46.583312   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetConfigRaw
	I0813 21:03:46.584076   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.589266   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589559   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.589588   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.589810   11447 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/config.json ...
	I0813 21:03:46.590017   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590212   11447 machine.go:88] provisioning docker machine ...
	I0813 21:03:46.590232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:46.590407   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590545   11447 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210813210102-30853"
	I0813 21:03:46.590576   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.590701   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.595270   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595544   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.595577   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.595711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.595884   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596013   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.596117   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.596285   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.596463   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.596478   11447 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813210102-30853 && echo "default-k8s-different-port-20210813210102-30853" | sudo tee /etc/hostname
	I0813 21:03:46.733223   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813210102-30853
	
	I0813 21:03:46.733252   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.739002   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739323   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.739359   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.739481   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.739690   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739849   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.739990   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.740161   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:46.740320   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:46.740349   11447 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813210102-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813210102-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813210102-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:03:46.872322   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:03:46.872366   11447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:03:46.872403   11447 buildroot.go:174] setting up certificates
	I0813 21:03:46.872413   11447 provision.go:83] configureAuth start
	I0813 21:03:46.872433   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetMachineName
	I0813 21:03:46.872715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:46.878075   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878404   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.878459   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.878540   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.882767   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883077   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.883108   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.883225   11447 provision.go:138] copyHostCerts
	I0813 21:03:46.883299   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:03:46.883314   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:03:46.883398   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:03:46.883517   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:03:46.883530   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:03:46.883563   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:03:46.883642   11447 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:03:46.883654   11447 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:03:46.883682   11447 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:03:46.883763   11447 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813210102-30853 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube default-k8s-different-port-20210813210102-30853]
	I0813 21:03:46.987158   11447 provision.go:172] copyRemoteCerts
	I0813 21:03:46.987214   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:03:46.987238   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:46.992216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992440   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:46.992475   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:46.992656   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:46.992817   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:46.992969   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:46.993066   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.083216   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0813 21:03:47.100865   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:03:47.117328   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:03:47.134074   11447 provision.go:86] duration metric: configureAuth took 261.642322ms
	I0813 21:03:47.134094   11447 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:03:47.134262   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:03:47.134353   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.139472   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139780   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.139807   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.139944   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.140097   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140275   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.140411   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.140599   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.140769   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.140790   11447 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:03:47.633895   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:03:47.633930   11447 machine.go:91] provisioned docker machine in 1.043703131s
	I0813 21:03:47.633942   11447 start.go:267] post-start starting for "default-k8s-different-port-20210813210102-30853" (driver="kvm2")
	I0813 21:03:47.633950   11447 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:03:47.633971   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.634293   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:03:47.634328   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.639277   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639636   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.639663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.639786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.639947   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.640111   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.640242   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.734400   11447 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:03:47.740052   11447 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:03:47.740071   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:03:47.740130   11447 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:03:47.740231   11447 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:03:47.740344   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:03:47.747174   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:03:47.764416   11447 start.go:270] post-start completed in 130.462296ms
	I0813 21:03:47.764450   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.764711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.770040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770384   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.770431   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.770530   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.770719   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.770894   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.771070   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.771253   11447 main.go:130] libmachine: Using SSH client type: native
	I0813 21:03:47.771444   11447 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0813 21:03:47.771459   11447 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:03:47.895861   11447 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888627.837623344
	
	I0813 21:03:47.895892   11447 fix.go:212] guest clock: 1628888627.837623344
	I0813 21:03:47.895903   11447 fix.go:225] Guest: 2021-08-13 21:03:47.837623344 +0000 UTC Remote: 2021-08-13 21:03:47.764694239 +0000 UTC m=+16.980843358 (delta=72.929105ms)
	I0813 21:03:47.895929   11447 fix.go:196] guest clock delta is within tolerance: 72.929105ms
	I0813 21:03:47.895937   11447 fix.go:57] fixHost completed within 16.950003029s
	I0813 21:03:47.895942   11447 start.go:80] releasing machines lock for "default-k8s-different-port-20210813210102-30853", held for 16.950031669s
	I0813 21:03:47.896001   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.896297   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:03:47.901493   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.901838   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.901870   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.902050   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902228   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902715   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:03:47.902976   11447 ssh_runner.go:149] Run: systemctl --version
	I0813 21:03:47.902995   11447 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:03:47.903007   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.903040   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:03:47.909125   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909422   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.909452   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.909630   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.909813   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.909935   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.910059   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:47.910088   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910489   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:03:47.910527   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:03:47.910654   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:03:47.910777   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:03:47.910927   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:03:47.911072   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:03:48.006087   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:03:48.006215   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:03:47.916188   11600 out.go:177] * Restarting existing kvm2 VM for "no-preload-20210813205915-30853" ...
	I0813 21:03:47.916218   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Start
	I0813 21:03:47.916374   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring networks are active...
	I0813 21:03:47.918363   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network default is active
	I0813 21:03:47.918666   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Ensuring network mk-no-preload-20210813205915-30853 is active
	I0813 21:03:47.919177   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Getting domain xml...
	I0813 21:03:47.921207   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Creating domain...
	I0813 21:03:48.385941   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting to get IP...
	I0813 21:03:48.387086   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.387686   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Found IP for machine: 192.168.105.107
	I0813 21:03:48.387718   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserving static IP address...
	I0813 21:03:48.387738   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has current primary IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.388204   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.388236   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Reserved static IP address: 192.168.105.107
	I0813 21:03:48.388276   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | skip adding static IP to network mk-no-preload-20210813205915-30853 - found existing host DHCP lease matching {name: "no-preload-20210813205915-30853", mac: "52:54:00:60:d2:3d", ip: "192.168.105.107"}
	I0813 21:03:48.388306   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:03:48.388326   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Waiting for SSH to be available...
	I0813 21:03:48.393946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394418   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 21:59:33 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:03:48.394445   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:03:48.394706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:03:48.394790   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:03:48.394865   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:03:48.394885   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:03:48.394902   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
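	
	WaitForSSH above is a retry loop around the external ssh client shown in the DBG lines: run "exit 0" on the guest until the connection is accepted. A rough sketch of that probe with a trimmed-down flag set and an assumed retry interval:
	
	    package main
	
	    import (
	        "log"
	        "os/exec"
	        "time"
	    )
	
	    // waitForSSH retries `ssh ... exit 0` until the guest accepts the
	    // connection, mirroring the WaitForSSH loop in the log above.
	    func waitForSSH(target, keyPath string, attempts int) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            err = exec.Command("ssh",
	                "-o", "StrictHostKeyChecking=no",
	                "-o", "UserKnownHostsFile=/dev/null",
	                "-o", "ConnectTimeout=10",
	                "-i", keyPath,
	                target, "exit", "0").Run()
	            if err == nil {
	                return nil
	            }
	            time.Sleep(3 * time.Second) // assumed retry interval
	        }
	        return err
	    }
	
	    func main() {
	        if err := waitForSSH("docker@192.168.105.107", "/path/to/id_rsa", 10); err != nil {
	            log.Fatalf("ssh never became available: %v", err)
	        }
	        log.Println("ssh available")
	    }
	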
	I0813 21:03:51.014322   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.517299   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:51.667636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:53.672798   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:52.032310   11447 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.026067051s)
	I0813 21:03:52.032472   11447 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:03:52.032533   11447 ssh_runner.go:149] Run: which lz4
	I0813 21:03:52.036917   11447 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:03:52.041879   11447 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:03:52.041911   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 21:03:54.836023   11447 crio.go:362] Took 2.799141 seconds to copy over tarball
	I0813 21:03:54.836104   11447 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:03:56.016199   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.747725   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:56.174092   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:58.745387   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:03:57.599639   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:03:58.136181   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:03:58.136210   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | command : exit 0
	I0813 21:03:58.136247   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | err     : exit status 255
	I0813 21:03:58.136301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | output  : 
	I0813 21:04:00.599792   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Getting to WaitForSSH function...
	I0813 21:04:00.606127   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606561   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:00.606599   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:00.606684   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH client type: external
	I0813 21:04:00.606710   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa (-rw-------)
	I0813 21:04:00.606759   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:04:00.606779   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | About to run SSH command:
	I0813 21:04:00.606791   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | exit 0
	I0813 21:04:01.865012   11447 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.028876371s)
	I0813 21:04:01.865051   11447 crio.go:369] Took 7.028990 seconds to extract the tarball
	I0813 21:04:01.865065   11447 ssh_runner.go:100] rm: /preloaded.tar.lz4
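	
	The preload restore above amounts to copying the ~576 MB tarball onto the guest and untarring it into /var through an lz4 filter, then deleting the tarball. A hedged guest-side sketch of those two steps (paths as in the log; requires root and an lz4 binary on PATH):
	
	    package main
	
	    import (
	        "log"
	        "os"
	        "os/exec"
	    )
	
	    func main() {
	        // Unpack the preloaded image tarball into /var, as in the logged
	        // command `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`.
	        if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var",
	            "-xf", "/preloaded.tar.lz4").CombinedOutput(); err != nil {
	            log.Fatalf("extract failed: %v\n%s", err, out)
	        }
	        if err := os.Remove("/preloaded.tar.lz4"); err != nil {
	            log.Printf("cleanup: %v", err)
	        }
	        log.Println("preloaded container images restored under /var")
	    }
	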
	I0813 21:04:01.909459   11447 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:01.921741   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:01.931836   11447 docker.go:153] disabling docker service ...
	I0813 21:04:01.931885   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:01.943769   11447 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:01.957001   11447 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:02.141489   11447 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:02.286672   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:04:02.301487   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:02.316482   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:04:02.324481   11447 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:02.332086   11447 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:02.332135   11447 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:02.348397   11447 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
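	
	The sequence above is the standard fallback: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the failed sysctl read (status 255) triggers a modprobe before IPv4 forwarding is switched on. A small sketch of the same steps, assuming it runs as root:
	
	    package main
	
	    import (
	        "log"
	        "os"
	        "os/exec"
	    )
	
	    func main() {
	        // If the bridge netfilter sysctl is absent, load the module first.
	        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	        if _, err := os.Stat(key); err != nil {
	            log.Printf("%s missing (%v); loading br_netfilter", key, err)
	            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
	                log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
	            }
	        }
	        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
	            log.Fatalf("enable ip_forward: %v", err)
	        }
	        log.Println("bridge netfilter and ip_forward configured")
	    }
	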
	I0813 21:04:02.355704   11447 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:02.519419   11447 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:02.853377   11447 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:02.853455   11447 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:02.859109   11447 start.go:413] Will wait 60s for crictl version
	I0813 21:04:02.859179   11447 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:02.895788   11447 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:02.895871   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:02.973856   11447 ssh_runner.go:149] Run: crio --version
	I0813 21:04:01.014560   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:03.513509   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:01.169481   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.824663   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:04.802040   11447 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 21:04:04.802102   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetIP
	I0813 21:04:04.808733   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809248   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:04:04.809286   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:04:04.809574   11447 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:04.815288   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
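	
	The bash one-liner above keeps the /etc/hosts update idempotent: filter out any stale host.minikube.internal entry, append the current mapping, stage the result in /tmp/h.$$, and copy it back over /etc/hosts. A rough Go equivalent that writes directly instead of using the temp file (it needs root as well):
	
	    package main
	
	    import (
	        "log"
	        "os"
	        "strings"
	    )
	
	    func main() {
	        const entry = "192.168.50.1\thost.minikube.internal"
	        data, err := os.ReadFile("/etc/hosts")
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Drop any existing host.minikube.internal line, then re-append.
	        var kept []string
	        for _, line := range strings.Split(string(data), "\n") {
	            if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, entry, "")
	        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
	            log.Fatal(err)
	        }
	        log.Printf("pinned %q", entry)
	    }
	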
	I0813 21:04:04.828595   11447 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 21:04:04.828664   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.877574   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.877604   11447 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:04:04.877660   11447 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:04.914222   11447 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:04:04.914249   11447 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:04:04.914336   11447 ssh_runner.go:149] Run: crio config
	I0813 21:04:05.157389   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:05.157412   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:05.157424   11447 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:05.157439   11447 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813210102-30853 NodeName:default-k8s-different-port-20210813210102-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.136 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:05.157622   11447 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "default-k8s-different-port-20210813210102-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:05.157727   11447 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=default-k8s-different-port-20210813210102-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.136 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0813 21:04:05.157774   11447 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:04:05.167087   11447 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:05.167155   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:05.175473   11447 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (528 bytes)
	I0813 21:04:05.188753   11447 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:04:05.201467   11447 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
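	
	The kubeadm.yaml.new staged above is the multi-document file printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---). A small sketch of splitting it back into its component documents for inspection, assuming gopkg.in/yaml.v3 is available:
	
	    package main
	
	    import (
	        "errors"
	        "fmt"
	        "io"
	        "os"
	
	        "gopkg.in/yaml.v3"
	    )
	
	    func main() {
	        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path staged in the log
	        if err != nil {
	            panic(err)
	        }
	        defer f.Close()
	
	        // A yaml.Decoder yields one document per Decode call.
	        dec := yaml.NewDecoder(f)
	        for {
	            var doc map[string]interface{}
	            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
	                break
	            } else if err != nil {
	                panic(err)
	            }
	            fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
	        }
	    }
	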
	I0813 21:04:05.215461   11447 ssh_runner.go:149] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:05.220200   11447 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.231726   11447 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853 for IP: 192.168.50.136
	I0813 21:04:05.231797   11447 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:05.231825   11447 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:05.231898   11447 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.key
	I0813 21:04:05.231928   11447 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key.cb5546de
	I0813 21:04:05.231952   11447 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key
	I0813 21:04:05.232111   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:05.232165   11447 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:05.232188   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:05.232232   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:05.232271   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:05.232307   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:05.232379   11447 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:05.233804   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:05.253715   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:05.273351   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:05.290830   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:04:05.308416   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:05.326529   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:05.346664   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:05.364492   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:05.381949   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:05.399680   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:05.419759   11447 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:05.438209   11447 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:05.450680   11447 ssh_runner.go:149] Run: openssl version
	I0813 21:04:05.457245   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:05.465670   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.470976   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.471018   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:05.477477   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:05.486446   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:05.494612   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499391   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.499438   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:05.505622   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:05.514421   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:05.523408   11447 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528337   11447 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.528382   11447 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:05.535765   11447 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
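	
	The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: OpenSSL resolves trusted CAs in /etc/ssl/certs by <hash>.0, which is why each certificate is first hashed and then symlinked. A sketch that regenerates those link commands from the hash output:
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    // subjectHash returns what `openssl x509 -hash -noout -in <pem>` prints:
	    // the subject-name hash OpenSSL uses to look certificates up in /etc/ssl/certs.
	    func subjectHash(pemPath string) (string, error) {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }
	
	    func main() {
	        for _, pem := range []string{
	            "/usr/share/ca-certificates/minikubeCA.pem",
	            "/usr/share/ca-certificates/30853.pem",
	            "/usr/share/ca-certificates/308532.pem",
	        } {
	            hash, err := subjectHash(pem)
	            if err != nil {
	                panic(err)
	            }
	            fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
	        }
	    }
	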
	I0813 21:04:05.544593   11447 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813210102-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210102-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:05.544684   11447 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:05.544726   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:05.585256   11447 cri.go:76] found id: ""
	I0813 21:04:05.585334   11447 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:05.593681   11447 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:05.593711   11447 kubeadm.go:600] restartCluster start
	I0813 21:04:05.593760   11447 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:05.602117   11447 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:05.603061   11447 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813210102-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:05.603385   11447 kubeconfig.go:128] "default-k8s-different-port-20210813210102-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:05.604147   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:05.606733   11447 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:05.614257   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.614297   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.624492   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:02.775071   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:04:02.775420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetConfigRaw
	I0813 21:04:02.776115   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:02.782201   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.782674   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.782712   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.783141   11600 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/config.json ...
	I0813 21:04:02.783367   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783571   11600 machine.go:88] provisioning docker machine ...
	I0813 21:04:02.783598   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:02.783770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.783946   11600 buildroot.go:166] provisioning hostname "no-preload-20210813205915-30853"
	I0813 21:04:02.783971   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:02.784147   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.789849   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790287   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.790320   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.790441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.790578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790777   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.790928   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.791095   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.791315   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.791336   11600 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813205915-30853 && echo "no-preload-20210813205915-30853" | sudo tee /etc/hostname
	I0813 21:04:02.946559   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813205915-30853
	
	I0813 21:04:02.946596   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:02.952957   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953358   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:02.953393   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:02.953568   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:02.953745   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.953960   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:02.954167   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:02.954385   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:02.954624   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:02.954665   11600 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813205915-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813205915-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813205915-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:04:03.094292   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:04:03.094324   11600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:04:03.094356   11600 buildroot.go:174] setting up certificates
	I0813 21:04:03.094369   11600 provision.go:83] configureAuth start
	I0813 21:04:03.094384   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetMachineName
	I0813 21:04:03.094688   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:03.100354   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100706   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.100739   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.100946   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.105867   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106237   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.106310   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.106463   11600 provision.go:138] copyHostCerts
	I0813 21:04:03.106530   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:04:03.106543   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:04:03.106590   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:04:03.106682   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:04:03.106693   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:04:03.106720   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:04:03.106783   11600 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:04:03.106793   11600 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:04:03.106815   11600 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:04:03.106882   11600 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813205915-30853 san=[192.168.105.107 192.168.105.107 localhost 127.0.0.1 minikube no-preload-20210813205915-30853]
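	
	configureAuth above signs a fresh server certificate against the .minikube CA with the IP and DNS names from the logged san=[...] list. A sketch of equivalent signing with Go's standard library; the local file names and the PKCS#1 key encoding are assumptions, not minikube's actual provisioning code:
	
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )
	
	    // mustDecode reads a PEM file and returns the DER bytes of its first block.
	    func mustDecode(path string) []byte {
	        raw, err := os.ReadFile(path)
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(raw)
	        if block == nil {
	            log.Fatalf("no PEM data in %s", path)
	        }
	        return block.Bytes
	    }
	
	    func main() {
	        // Hypothetical local copies of the CA pair; the log keeps them
	        // under .minikube/certs/.
	        ca, err := x509.ParseCertificate(mustDecode("ca.pem"))
	        if err != nil {
	            log.Fatal(err)
	        }
	        caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes PKCS#1
	        if err != nil {
	            log.Fatal(err)
	        }
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-20210813205915-30853"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(1, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs taken from the san=[...] list in the log line above.
	            IPAddresses: []net.IP{net.ParseIP("192.168.105.107"), net.ParseIP("127.0.0.1")},
	            DNSNames:    []string{"localhost", "minikube", "no-preload-20210813205915-30853"},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
	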
	I0813 21:04:03.232637   11600 provision.go:172] copyRemoteCerts
	I0813 21:04:03.232735   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:04:03.232781   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.238750   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239227   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.239262   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.239441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.239634   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.239802   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.239979   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:03.330067   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:04:03.347432   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:04:03.580187   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:04:03.733835   11600 provision.go:86] duration metric: configureAuth took 639.447362ms
	I0813 21:04:03.733873   11600 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:04:03.734092   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:04:03.734225   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:03.740654   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741046   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:03.741091   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:03.741217   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:03.741420   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741586   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:03.741748   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:03.741941   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:03.742078   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:03.742093   11600 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:04:04.399833   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:04:04.399867   11600 machine.go:91] provisioned docker machine in 1.616277375s
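
Annotation: the literal %!s(MISSING) in the provisioning command above is not transport corruption; it is Go's fmt package marking a %s verb that had no matching argument when the command template was logged. The command itself executed correctly, as the CRIO_MINIKUBE_OPTIONS output confirms. A one-line Go reproduction (go vet flags the second call, but it compiles and runs):

    package main

    import "fmt"

    func main() {
    	// A verb with a matching argument renders normally...
    	fmt.Println(fmt.Sprintf("printf %s", "CRIO_MINIKUBE_OPTIONS=..."))
    	// ...while a %s verb with no argument prints as %!s(MISSING),
    	// exactly as seen in the logged `sudo mkdir -p /etc/sysconfig && printf ...` line.
    	fmt.Println(fmt.Sprintf("printf %s"))
    }
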
	I0813 21:04:04.399881   11600 start.go:267] post-start starting for "no-preload-20210813205915-30853" (driver="kvm2")
	I0813 21:04:04.399888   11600 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:04:04.399909   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.400282   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:04:04.400324   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.406533   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.406945   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.406987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.407240   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.407441   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.407578   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.407746   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.498949   11600 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:04:04.503867   11600 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:04:04.503896   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:04:04.503972   11600 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:04:04.504097   11600 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:04:04.504223   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:04:04.511733   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:04.528408   11600 start.go:270] post-start completed in 128.513758ms
	I0813 21:04:04.528443   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.528707   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.534254   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534663   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.534695   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.534799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.534987   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535140   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.535279   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.535426   11600 main.go:130] libmachine: Using SSH client type: native
	I0813 21:04:04.535597   11600 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.105.107 22 <nil> <nil>}
	I0813 21:04:04.535608   11600 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:04:04.663945   11600 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888644.593571707
	
	I0813 21:04:04.663967   11600 fix.go:212] guest clock: 1628888644.593571707
	I0813 21:04:04.663974   11600 fix.go:225] Guest: 2021-08-13 21:04:04.593571707 +0000 UTC Remote: 2021-08-13 21:04:04.528687546 +0000 UTC m=+32.319635142 (delta=64.884161ms)
	I0813 21:04:04.663992   11600 fix.go:196] guest clock delta is within tolerance: 64.884161ms
	I0813 21:04:04.663998   11600 fix.go:57] fixHost completed within 16.76784432s
	I0813 21:04:04.664002   11600 start.go:80] releasing machines lock for "no-preload-20210813205915-30853", held for 16.76787935s
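
Annotation: fix.go above reads the guest clock over SSH (the garbled `date +%!s(MISSING).%!N(MISSING)` is presumably `date +%s.%N`, seconds and nanoseconds, with the verbs swallowed by the same logging quirk), computes the delta against the host, and skips resyncing because 64.88ms is within tolerance. A hedged sketch of that comparison (the 2s tolerance is an assumption for illustration; the log does not state the threshold):

    package main

    import (
    	"fmt"
    	"time"
    )

    // needsClockSync reports whether the guest clock drifted past tolerance.
    func needsClockSync(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta > tolerance
    }

    func main() {
    	guest := time.Unix(1628888644, 593571707) // from the SSH output above
    	host := guest.Add(-64884161 * time.Nanosecond)
    	fmt.Println(needsClockSync(guest, host, 2*time.Second)) // false: within tolerance
    }
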
	I0813 21:04:04.664032   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.664301   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:04.670385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670693   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.670728   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.670905   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671084   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671497   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:04:04.671741   11600 ssh_runner.go:149] Run: systemctl --version
	I0813 21:04:04.671770   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.671781   11600 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:04:04.671828   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:04:04.677842   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.677920   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678239   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678271   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678303   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:04.678327   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:04.678385   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678537   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:04:04.678601   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678680   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:04:04.678746   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678799   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:04:04.678866   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.678918   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:04:04.778153   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:04.778247   11600 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:04:04.790123   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:04:04.799742   11600 docker.go:153] disabling docker service ...
	I0813 21:04:04.799795   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:04:04.814660   11600 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:04:04.826371   11600 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:04:04.984940   11600 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:04:05.134330   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:04:05.146967   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:04:05.162919   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:04:05.171969   11600 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:04:05.178773   11600 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:04:05.178830   11600 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:04:05.195828   11600 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
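
Annotation: the status-255 sysctl failure above is the expected first probe on a fresh guest, as the log itself notes ("which might be okay"): /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A sketch of that probe-then-load order (not the actual crio.go code; error handling reduced for brevity):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// First probe fails with status 255 until the module is loaded.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("netfilter not verified, loading br_netfilter:", err)
    		_ = exec.Command("sudo", "modprobe", "br_netfilter").Run()
    	}
    	// Pod traffic needs IPv4 forwarding regardless of the bridge module state.
    	_ = exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }
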
	I0813 21:04:05.202754   11600 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:04:05.337419   11600 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:04:05.559682   11600 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:04:05.559752   11600 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:04:05.566062   11600 start.go:413] Will wait 60s for crictl version
	I0813 21:04:05.566138   11600 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:04:05.601921   11600 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:04:05.602001   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.842661   11600 ssh_runner.go:149] Run: crio --version
	I0813 21:04:05.956395   11600 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:04:05.956450   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetIP
	I0813 21:04:05.962605   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.962975   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:04:05.962999   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:04:05.963185   11600 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0813 21:04:05.968381   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:04:05.979746   11600 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:04:05.979790   11600 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:04:06.037577   11600 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:04:06.037602   11600 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 k8s.gcr.io/kube-proxy:v1.22.0-rc.0 k8s.gcr.io/pause:3.4.1 k8s.gcr.io/etcd:3.4.13-3 k8s.gcr.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.037756   11600 image.go:133] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.037772   11600 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:06.037684   11600 image.go:133] retrieving image: k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.037785   11600 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:06.037762   11600 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:06.037738   11600 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.037735   11600 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.037741   11600 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:06.037767   11600 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:06.039362   11600 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0813 21:04:06.053753   11600 image.go:171] found k8s.gcr.io/pause:3.4.1 locally: &{Image:0xc000d620e0}
	I0813 21:04:06.053840   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.4.1
	I0813 21:04:06.454088   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.627170   11600 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000a3e0e0}
	I0813 21:04:06.627262   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.677125   11600 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" does not exist at hash "ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c" in container runtime
	I0813 21:04:06.677177   11600 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:06.677243   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.772729   11600 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000a3e3e0}
	I0813 21:04:06.772826   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 21:04:06.829141   11600 image.go:171] found k8s.gcr.io/coredns/coredns:v1.8.0 locally: &{Image:0xc00142e1e0}
	I0813 21:04:06.829237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.0
	I0813 21:04:06.902889   11600 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 21:04:06.902989   11600 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:06.903035   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:06.902933   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0813 21:04:07.109713   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.109813   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:04:07.109896   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117259   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0 (exists)
	I0813 21:04:07.117279   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.117314   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0813 21:04:07.171175   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 21:04:07.171310   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
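
Annotation: the image-load path above stats the remote tarball (the logged `stat -c "%!s(MISSING) %!y(MISSING)"` is presumably `stat -c "%s %y"`, size and mtime, garbled by the same fmt quirk), skips the copy when the cached file already exists, and then streams it into the CRI-O image store with `podman load -i`. A reduced sketch of that check-then-load step (run locally here rather than over SSH, which is a simplification):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func loadCachedImage(tarball string) error {
    	// If the tarball is already present, the copy from the host cache is skipped.
    	if out, err := exec.Command("stat", "-c", "%s %y", tarball).Output(); err == nil {
    		fmt.Printf("copy: skipping %s (exists: %s)", tarball, out)
    	}
    	// Load the tarball into the runtime's image store.
    	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
    }

    func main() {
    	fmt.Println(loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.22.0-rc.0"))
    }
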
	I0813 21:04:05.516944   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:08.013394   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:07.172226   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:09.188184   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:05.824992   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:05.825077   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:05.837175   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.025601   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.025691   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.036326   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.225644   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.225742   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.238574   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.425637   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.425737   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.438316   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.625622   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.625698   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.643437   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:06.824708   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:06.824784   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:06.840790   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.024978   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.025048   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.042237   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.225613   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.225690   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.238533   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.424924   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.425004   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.437239   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.625345   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.625418   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.643925   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:07.825147   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:07.825246   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:07.839517   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.024742   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.024831   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.037540   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.224652   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.224733   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.237758   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.425032   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.425121   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.438563   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.624675   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.624790   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.640197   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.640219   11447 api_server.go:164] Checking apiserver status ...
	I0813 21:04:08.640266   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:08.654071   11447 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:08.654097   11447 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:04:08.654106   11447 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:08.654124   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:08.654177   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:08.717698   11447 cri.go:76] found id: ""
	I0813 21:04:08.717795   11447 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:08.753323   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:08.778307   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:08.778369   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800125   11447 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:08.800151   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:09.316586   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.438674   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.122049553s)
	I0813 21:04:10.438715   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:07.759123   11600 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc000d620e0}
	I0813 21:04:07.759237   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:09.111081   11600 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc00142e040}
	I0813 21:04:09.111212   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:09.462306   11600 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc00142e140}
	I0813 21:04:09.462414   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:10.255823   11600 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc0012f0120}
	I0813 21:04:10.255916   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3
	I0813 21:04:11.315708   11600 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000d62460}
	I0813 21:04:11.315815   11600 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 21:04:10.514963   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:12.516333   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:11.670913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:14.171134   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:10.800884   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:10.992029   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:11.167449   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:11.167518   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:11.684011   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.184677   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:12.684502   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.184162   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:13.684035   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.183991   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:14.683969   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.184603   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:15.684380   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
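
Annotation: the run of pgrep calls above is a poll at roughly 500ms intervals, waiting for the kube-apiserver process to appear after the kubeadm init phases restart the control plane. A sketch of the same loop shape (the 60s deadline is an assumption; the log only shows the interval and, further down, the total 9.04s it took):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Same probe the log shows: pgrep against the apiserver command line.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // process found
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for apiserver process", timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServerProcess(60 * time.Second))
    }
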
	I0813 21:04:13.372670   11600 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (6.201329225s)
	I0813 21:04:13.372706   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: (6.255368199s)
	I0813 21:04:13.372718   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0813 21:04:13.372732   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 from cache
	I0813 21:04:13.372728   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (5.613461548s)
	I0813 21:04:13.372758   11600 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372783   11600 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" does not exist at hash "7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75" in container runtime
	I0813 21:04:13.372830   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (3.910399102s)
	I0813 21:04:13.372858   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0813 21:04:13.372868   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3: (3.116939311s)
	I0813 21:04:13.372873   11600 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" does not exist at hash "b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a" in container runtime
	I0813 21:04:13.372900   11600 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:13.372831   11600 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.372924   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (2.057095132s)
	I0813 21:04:13.372931   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372936   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.372786   11600 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0: (4.261556732s)
	I0813 21:04:13.373009   11600 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" does not exist at hash "cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c" in container runtime
	I0813 21:04:13.373032   11600 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:13.373056   11600 ssh_runner.go:149] Run: which crictl
	I0813 21:04:13.381245   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0813 21:04:13.381490   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0813 21:04:15.288527   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.915644282s)
	I0813 21:04:15.288559   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 21:04:15.288601   11600 ssh_runner.go:189] Completed: which crictl: (1.91552977s)
	I0813 21:04:15.288660   11600 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0813 21:04:15.288670   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: (1.907403335s)
	I0813 21:04:15.288709   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288741   11600 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (1.90722818s)
	I0813 21:04:15.288782   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.288805   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:15.288858   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323185   11600 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323264   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323283   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.323302   11600 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:15.323314   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0 (exists)
	I0813 21:04:15.323320   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0813 21:04:15.329111   11600 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0 (exists)
	I0813 21:04:15.011212   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:17.011691   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.670490   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:19.170343   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:16.184356   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:16.684936   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.184954   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:17.684681   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.184911   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:18.684242   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.184095   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:19.683984   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.184175   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:20.210489   11447 api_server.go:70] duration metric: took 9.043039811s to wait for apiserver process to appear ...
	I0813 21:04:20.210519   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:20.210533   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:20.211291   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": dial tcp 192.168.50.136:8444: connect: connection refused
	I0813 21:04:20.711989   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:21.745565   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: (6.422201905s)
	I0813 21:04:21.745599   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 from cache
	I0813 21:04:21.745635   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:21.745691   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0813 21:04:19.017281   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.514778   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.515219   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:21.171057   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:23.670243   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.713040   11447 api_server.go:255] stopped: https://192.168.50.136:8444/healthz: Get "https://192.168.50.136:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:24.199550   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: (2.45382894s)
	I0813 21:04:24.199592   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 from cache
	I0813 21:04:24.199629   11600 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:24.199702   11600 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0813 21:04:26.212134   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:26.605510   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:04:26.605545   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:04:26.711743   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.047887   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.047925   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.212219   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.218272   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.218303   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:27.711515   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:27.725621   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:04:27.725665   11447 api_server.go:101] status: https://192.168.50.136:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:04:28.212046   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:04:28.224546   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:04:28.234553   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:04:28.234579   11447 api_server.go:129] duration metric: took 8.024053155s to wait for apiserver health ...
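
Annotation: the healthz transcript above shows the usual apiserver startup progression: connection refused while the process binds, 403 while anonymous access to /healthz is still forbidden, 500 while the poststarthooks (bootstrap-controller, rbac/bootstrap-roles, and so on) settle one by one, and finally 200 "ok". A hedged polling sketch of the same shape (skipping TLS verification is an assumption made for the sketch only; the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "ok"
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz never returned 200 within %s", timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.50.136:8444/healthz", 60*time.Second))
    }
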
	I0813 21:04:28.234595   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:04:28.234616   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:26.019080   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.516769   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:25.670866   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:27.671923   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:30.171118   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:28.236904   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:04:28.236969   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:04:28.252383   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
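
Annotation: the 457-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that the "recommending bridge" lines refer to. A sketch that emits a plausible minimal conflist of that kind (the field values are assumptions, not the actual payload; the 10.244.0.0/16 subnet matches the pod CIDR kubeadm.go logs further down):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	conflist := map[string]interface{}{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]interface{}{{
    			"type":             "bridge",
    			"bridge":           "bridge",
    			"isDefaultGateway": true,
    			"ipMasq":           true,
    			"hairpinMode":      true,
    			"ipam": map[string]interface{}{
    				"type":   "host-local",
    				"subnet": "10.244.0.0/16", // matches the pod CIDR logged below
    			},
    		}},
    	}
    	b, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(b))
    }
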
	I0813 21:04:28.300743   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:04:28.320179   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:04:28.320225   11447 system_pods.go:61] "coredns-558bd4d5db-v2sv5" [3b82b811-5e28-41dc-b0e1-71233efc654e] Running
	I0813 21:04:28.320234   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [89cff97c-ff5c-4920-a05f-1ec7b313043b] Running
	I0813 21:04:28.320241   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [734380ac-398d-4b51-a67f-aaac2457110c] Running
	I0813 21:04:28.320252   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [ebc5d291-624f-4c49-b9cb-436204a7665a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:04:28.320261   11447 system_pods.go:61] "kube-proxy-99cxm" [a1bfba1d-d9fb-4d24-abe9-fd0522c591f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:04:28.320271   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [b66e01ad-943e-4a2c-aabe-d18f92fd5eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 21:04:28.320290   11447 system_pods.go:61] "metrics-server-7c784ccb57-xfj59" [b522ac66-040a-4030-a817-c422c703b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:04:28.320308   11447 system_pods.go:61] "storage-provisioner" [d59ea453-ed7b-4952-bd61-7993245a1986] Running
	I0813 21:04:28.320315   11447 system_pods.go:74] duration metric: took 19.546937ms to wait for pod list to return data ...
	I0813 21:04:28.320330   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:04:28.329682   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:04:28.329749   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:04:28.329769   11447 node_conditions.go:105] duration metric: took 9.429948ms to run NodePressure ...
	I0813 21:04:28.329793   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:29.546168   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.216348804s)
	I0813 21:04:29.546210   11447 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563341   11447 kubeadm.go:746] kubelet initialised
	I0813 21:04:29.563369   11447 kubeadm.go:747] duration metric: took 17.148102ms waiting for restarted kubelet to initialise ...
	I0813 21:04:29.563380   11447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:04:29.573196   11447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
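
From this point pod_ready.go polls each system-critical pod until its Ready condition turns True, which is what the interleaved pod_ready.go:102 lines below keep reporting. A minimal client-go sketch of that check (the kubeconfig path is a hypothetical placeholder, not minikube's actual code path):

// podready_sketch.go - a minimal sketch of waiting for a pod's Ready
// condition, assuming a reachable kubeconfig.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True"
				}
			}
		}
		time.Sleep(2 * time.Second) // the log lines below poll on a similar cadence
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-558bd4d5db-v2sv5", 4*time.Minute))
}
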
	I0813 21:04:29.338170   11600 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: (5.138437758s)
	I0813 21:04:29.338201   11600 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 from cache
	I0813 21:04:29.338230   11600 cache_images.go:113] Successfully loaded all cached images
	I0813 21:04:29.338242   11600 cache_images.go:82] LoadImages completed in 23.300623842s
	I0813 21:04:29.338374   11600 ssh_runner.go:149] Run: crio config
	I0813 21:04:29.638116   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:04:29.638137   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:04:29.638149   11600 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:04:29.638162   11600 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.107 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813205915-30853 NodeName:no-preload-20210813205915-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.107 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:04:29.638336   11600 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "no-preload-20210813205915-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:04:29.638444   11600 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=no-preload-20210813205915-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.107 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
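
minikube renders the kubeadm manifest and kubelet unit shown above from Go text/template before copying them to the node. A small sketch of that rendering technique, with a trimmed template and a hypothetical parameter struct:

// kubeadmtmpl_sketch.go - a minimal sketch of rendering a kubeadm-style
// manifest with text/template; the struct and field names are hypothetical.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(manifest))
	// Render straight to stdout; minikube writes the result over SSH to
	// /var/tmp/minikube/kubeadm.yaml instead.
	if err := t.Execute(os.Stdout, params{AdvertiseAddress: "192.168.105.107", APIServerPort: 8443}); err != nil {
		panic(err)
	}
}
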
	I0813 21:04:29.638511   11600 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:04:29.651119   11600 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:04:29.651199   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:04:29.658178   11600 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (518 bytes)
	I0813 21:04:29.674188   11600 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:04:29.689809   11600 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I0813 21:04:29.704568   11600 ssh_runner.go:149] Run: grep 192.168.105.107	control-plane.minikube.internal$ /etc/hosts
	I0813 21:04:29.709516   11600 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
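
This one-liner keeps the hosts entry idempotent: filter out any stale line ending in a tab plus control-plane.minikube.internal, append the current IP, and copy the temp file over /etc/hosts. The same filter-then-append logic in Go, sketched against a local file rather than over SSH:

// hostsentry_sketch.go - a sketch of the idempotent /etc/hosts update shown
// above, done locally instead of via ssh_runner.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		// Drop any existing entry for this hostname (the grep -v in the log).
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"), "192.168.105.107", "control-plane.minikube.internal"))
}
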
	I0813 21:04:29.722084   11600 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853 for IP: 192.168.105.107
	I0813 21:04:29.722165   11600 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:04:29.722197   11600 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:04:29.722281   11600 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.key
	I0813 21:04:29.722312   11600 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key.209a1939
	I0813 21:04:29.722343   11600 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key
	I0813 21:04:29.722473   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:04:29.722561   11600 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:04:29.722580   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:04:29.722661   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:04:29.722712   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:04:29.722757   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:04:29.722866   11600 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:04:29.724368   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:04:29.746769   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:04:29.768192   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:04:29.786871   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:04:29.806532   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:04:29.825599   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:04:29.847494   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:04:29.870257   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:04:29.892328   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:04:29.912923   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:04:29.931703   11600 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:04:29.951536   11600 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:04:29.968398   11600 ssh_runner.go:149] Run: openssl version
	I0813 21:04:29.976170   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:04:29.984473   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989429   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.989476   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:04:29.995576   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:04:30.003420   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:04:30.011665   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.017989   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.018036   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:04:30.025928   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:04:30.036305   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:04:30.046763   11600 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052505   11600 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.052558   11600 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:04:30.059983   11600 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
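
Each "openssl x509 -hash -noout" call above computes the OpenSSL subject hash, and the following "ln -fs" publishes the certificate under /etc/ssl/certs/<hash>.0, the filename OpenSSL uses for CA lookup. A Go sketch of those two steps for the minikubeCA cert from the log:

// certhash_sketch.go - a sketch of the subject-hash symlink dance in the
// log: hash the cert with openssl, then link /etc/ssl/certs/<hash>.0 to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: replace any existing link (needs root).
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
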
	I0813 21:04:30.068353   11600 kubeadm.go:390] StartCluster: {Name:no-preload-20210813205915-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813205915-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:04:30.068511   11600 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:04:30.068563   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:30.103079   11600 cri.go:76] found id: ""
	I0813 21:04:30.103167   11600 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:04:30.112165   11600 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:04:30.112188   11600 kubeadm.go:600] restartCluster start
	I0813 21:04:30.112242   11600 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:04:30.120196   11600 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.121712   11600 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813205915-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:04:30.122350   11600 kubeconfig.go:128] "no-preload-20210813205915-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:04:30.123522   11600 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:04:30.127714   11600 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:04:30.134966   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.135011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.144537   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.344893   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.345009   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.354676   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.544891   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.544966   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.554560   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.744600   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.744692   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.756935   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:30.945184   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:30.945265   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:30.955263   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.145650   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.145758   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.157682   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.344971   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.345039   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.354648   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.544933   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.545001   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.554862   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.745107   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.745178   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.756702   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:31.945036   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:31.945134   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:31.956052   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.145356   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.145486   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.154892   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
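
The burst of "pgrep -xnf kube-apiserver.*minikube.*" runs above is a poll loop: pgrep exits 1 while no apiserver process matches, so api_server.go sleeps briefly and retries until a PID appears or the wait times out. A dependency-free sketch of the same loop:

// apiserverpid_sketch.go - a sketch of polling for the kube-apiserver
// process, mirroring the pgrep loop in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		// pgrep exits 1 when nothing matches; sleep and retry, as the
		// timestamps above show (attempts roughly 200-500ms apart).
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
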
	I0813 21:04:31.013514   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.515372   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.667378   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:34.671027   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:31.606937   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:33.614157   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:32.344907   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.344989   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.354828   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.545178   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.545268   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.554771   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.745015   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.745132   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.754451   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:32.945134   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:32.945223   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:32.958046   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.145379   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.145471   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.156311   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.156338   11600 api_server.go:164] Checking apiserver status ...
	I0813 21:04:33.156387   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:04:33.166450   11600 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:04:33.166479   11600 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:04:33.166489   11600 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:04:33.166504   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:04:33.166556   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:04:33.201224   11600 cri.go:76] found id: ""
	I0813 21:04:33.201320   11600 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:04:33.218274   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:04:33.226895   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:04:33.226953   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233603   11600 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:04:33.233633   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:33.409004   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.227200   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.522150   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.670047   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:04:34.781290   11600 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:04:34.781393   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.294318   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.794319   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.294093   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:36.794810   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:35.517996   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.013307   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.169398   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:39.667640   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:36.109861   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:38.110944   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:40.608444   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:37.294229   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:37.794174   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.294380   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:38.795081   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.295011   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:39.794912   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.294691   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.794676   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.294339   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:41.794517   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:40.514739   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.515815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:41.674615   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:44.171008   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:43.111611   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:45.608557   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:42.294762   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:42.794735   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.294817   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.794556   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:04:43.818714   11600 api_server.go:70] duration metric: took 9.037423183s to wait for apiserver process to appear ...
	I0813 21:04:43.818749   11600 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:04:43.818763   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:43.819314   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": dial tcp 192.168.105.107:8443: connect: connection refused
	I0813 21:04:44.319959   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:45.012244   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.016481   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:46.672075   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.172907   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:47.615450   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:50.112038   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:49.320842   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:49.820028   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:49.514363   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.012464   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:51.669686   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:53.793699   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:52.607875   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.608704   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:54.821107   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:04:55.319665   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:04:54.013451   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.512870   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.517483   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:56.168752   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:58.169636   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:57.108818   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:04:59.110668   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.319940   11600 api_server.go:255] stopped: https://192.168.105.107:8443/healthz: Get "https://192.168.105.107:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:05:00.819508   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:01.018546   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:03.515645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:00.668977   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:02.670402   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.170956   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:01.618304   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:04.109034   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:05.157882   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:05:05.158001   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
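
A 403 at this stage means the apiserver is already terminating TLS and answering requests, but anonymous access to /healthz has not been authorized yet (the rbac/bootstrap-roles hook has not finished); the probe keeps retrying until it gets 200 with body "ok". A sketch of such a bootstrap probe, which skips certificate verification since the cluster CA is not trusted by the host at this point:

// healthz_sketch.go - a sketch of the /healthz probe seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification for this sketch; a real probe would pin the
		// cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.105.107:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // connection refused / timeout, as above
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 403: up but anonymous access not yet authorized; 500: poststarthooks
	// still failing; 200 with "ok": healthy.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
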
	I0813 21:05:05.320212   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.504416   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.504471   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:05.819967   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:05.864291   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:05.864338   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.319440   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.332338   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:06.332364   11600 api_server.go:101] status: https://192.168.105.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:05:06.820046   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:05:06.827164   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 200:
	ok
	I0813 21:05:06.836155   11600 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:05:06.836176   11600 api_server.go:129] duration metric: took 23.017420085s to wait for apiserver health ...
	I0813 21:05:06.836188   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:05:06.836198   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:05:06.838586   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:05:06.838684   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:05:06.847037   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:05:06.865264   11600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:05:06.893537   11600 system_pods.go:59] 8 kube-system pods found
	I0813 21:05:06.893572   11600 system_pods.go:61] "coredns-78fcd69978-wqktx" [84e2ed0e-2c5a-4dcc-a8de-2cee9f92d267] Running
	I0813 21:05:06.893578   11600 system_pods.go:61] "etcd-no-preload-20210813205915-30853" [de55bcf6-20c8-4b4a-81e0-b181cca0e618] Running
	I0813 21:05:06.893582   11600 system_pods.go:61] "kube-apiserver-no-preload-20210813205915-30853" [53002765-155d-4f17-b484-2fe4e088255d] Running
	I0813 21:05:06.893587   11600 system_pods.go:61] "kube-controller-manager-no-preload-20210813205915-30853" [6052be3c-51df-4a5c-b8a1-6a5a64b4d241] Running
	I0813 21:05:06.893594   11600 system_pods.go:61] "kube-proxy-vvkkd" [c6eef664-f71d-4d0f-aec7-8942b5977520] Running
	I0813 21:05:06.893599   11600 system_pods.go:61] "kube-scheduler-no-preload-20210813205915-30853" [24d521ca-7b13-4b06-805d-7b568471cffb] Running
	I0813 21:05:06.893615   11600 system_pods.go:61] "metrics-server-7c784ccb57-rfp5v" [8c3b111e-0b1d-4a36-85ab-49fe495a538e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:05:06.893629   11600 system_pods.go:61] "storage-provisioner" [dfb23af4-15d2-420e-8720-c4fee1cf94f8] Running
	I0813 21:05:06.893637   11600 system_pods.go:74] duration metric: took 28.354614ms to wait for pod list to return data ...
	I0813 21:05:06.893648   11600 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:05:06.916270   11600 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:05:06.916300   11600 node_conditions.go:123] node cpu capacity is 2
	I0813 21:05:06.916316   11600 node_conditions.go:105] duration metric: took 22.662818ms to run NodePressure ...
	I0813 21:05:06.916337   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:05.516343   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.517331   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.670058   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:09.675888   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:06.111044   11447 pod_ready.go:102] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.608567   11447 pod_ready.go:92] pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.608606   11447 pod_ready.go:81] duration metric: took 38.035378096s waiting for pod "coredns-558bd4d5db-v2sv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.608620   11447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615404   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.615428   11447 pod_ready.go:81] duration metric: took 6.797829ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.615442   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630269   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.630291   11447 pod_ready.go:81] duration metric: took 14.84004ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.630301   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637173   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.637191   11447 pod_ready.go:81] duration metric: took 6.881994ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.637205   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641787   11447 pod_ready.go:92] pod "kube-proxy-99cxm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:07.641806   11447 pod_ready.go:81] duration metric: took 4.592412ms waiting for pod "kube-proxy-99cxm" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:07.641816   11447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006732   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:08.006761   11447 pod_ready.go:81] duration metric: took 364.934714ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:08.006777   11447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.416206   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:07.404648   11600 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:05:07.414912   11600 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0813 21:05:07.708787   11600 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0813 21:05:08.256390   11600 kubeadm.go:746] kubelet initialised
	I0813 21:05:08.256419   11600 kubeadm.go:747] duration metric: took 851.738381ms waiting for restarted kubelet to initialise ...
	I0813 21:05:08.256432   11600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:05:08.265413   11600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:10.372610   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:10.016406   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.513411   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.171097   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.667560   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.416520   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:14.917152   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:12.791126   11600 pod_ready.go:102] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:15.296951   11600 pod_ready.go:92] pod "coredns-78fcd69978-wqktx" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:15.296981   11600 pod_ready.go:81] duration metric: took 7.031537534s waiting for pod "coredns-78fcd69978-wqktx" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:15.296992   11600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:14.513966   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.518250   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.669467   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:18.670323   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:16.956540   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.413311   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.316436   11600 pod_ready.go:102] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:17.817195   11600 pod_ready.go:92] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.817242   11600 pod_ready.go:81] duration metric: took 2.520242337s waiting for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.817255   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.825965   11600 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:17.825988   11600 pod_ready.go:81] duration metric: took 8.722511ms waiting for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:17.826001   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:19.873713   11600 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:19.011904   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.016678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.516661   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.171346   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.667746   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:21.422135   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:23.915750   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:22.369972   11600 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.370008   11600 pod_ready.go:81] duration metric: took 4.543995238s waiting for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.370023   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377665   11600 pod_ready.go:92] pod "kube-proxy-vvkkd" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.377685   11600 pod_ready.go:81] duration metric: took 7.65301ms waiting for pod "kube-proxy-vvkkd" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.377696   11600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385096   11600 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:05:22.385113   11600 pod_ready.go:81] duration metric: took 7.408599ms waiting for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:22.385121   11600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	I0813 21:05:24.402382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.901061   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.018949   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.513145   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:25.668326   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.186367   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:26.415525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.913863   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:28.902947   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.903048   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.516874   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.011959   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.666530   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:32.666799   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:34.668707   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:30.915376   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.415440   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.415962   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:33.403872   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.902644   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:35.014820   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.015893   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.169496   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.170551   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:37.918334   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.414297   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:38.408969   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:40.903397   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:39.017723   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.512620   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.513209   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:41.171007   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.668192   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:42.915720   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.423660   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:43.403450   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.445034   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.515122   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.013001   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:45.669651   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:48.167953   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.171552   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.916795   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:49.916975   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:47.904497   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.399990   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:50.512153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.512918   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.174821   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.670257   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.414652   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.415677   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:52.404181   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.904430   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:54.515153   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.013806   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.168792   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.666912   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:56.416201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:58.917986   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:57.401016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.404016   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.906289   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:05:59.512815   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.514140   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.668491   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.668678   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:01.413828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:03.414479   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.403957   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.901856   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:04.012166   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.013309   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.512931   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:06.168995   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.667450   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:05.918408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.416404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.416808   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:08.903609   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.405857   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:11.014642   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.512706   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:10.669910   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.170072   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:12.919893   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.417469   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.901800   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:16.402802   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.514827   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.012928   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.668033   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.668913   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.167322   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:17.914829   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.413984   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.405532   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.902412   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.017907   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.514292   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.170177   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.668943   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.416213   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:24.922905   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.902968   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.401882   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.067645   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.519637   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.167658   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.168133   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.413791   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.414145   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.402765   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.403392   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.900702   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:30.012069   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:32.014177   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.169296   10272 pod_ready.go:102] pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.160326   10272 pod_ready.go:81] duration metric: took 4m0.399801158s waiting for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" ...
	E0813 21:06:33.160356   10272 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-wf2ft" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:06:33.160383   10272 pod_ready.go:38] duration metric: took 4m1.6003819s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:33.160416   10272 kubeadm.go:604] restartCluster took 4m59.137608004s
	W0813 21:06:33.160600   10272 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:06:33.160640   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
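The 10272 lines above show the fallback path: once the 4m0s readiness budget is exhausted (pod_ready.go:66, "will not retry!"), restartCluster gives up and minikube wipes the cluster with kubeadm reset before re-initialising it. The control flow is essentially "poll with a deadline, fall back on timeout"; a stdlib-only sketch of that shape (an assumption about structure, not minikube's actual helpers):

    import "time"

    // waitThenFallback polls check every interval until it returns true,
    // and runs fallback if the timeout elapses first.
    func waitThenFallback(timeout, interval time.Duration, check func() bool, fallback func() error) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if check() {
                return nil
            }
            time.Sleep(interval)
        }
        return fallback() // e.g. "kubeadm reset ... --force" as in the log
    }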
	I0813 21:06:31.419127   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.918800   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.903797   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.401884   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:34.015031   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.513631   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.414485   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.415451   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.416420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.900640   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.901483   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:39.011809   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:41.013908   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:43.513605   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.920201   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.415258   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.905257   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:44.905610   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:45.514466   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.515852   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.415484   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.415708   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.414520   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.903972   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.517251   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.012858   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:51.918221   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.918831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:52.402393   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.902136   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.513409   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.012531   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.392100   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.231434099s)
	I0813 21:07:00.392193   10272 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:00.406886   10272 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:00.406959   10272 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:00.442137   10272 cri.go:76] found id: ""
	I0813 21:07:00.442208   10272 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:00.449499   10272 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:00.458330   10272 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:07:00.458372   10272 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
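The reset/init sequence above runs kubeadm on the guest through a command runner and keys decisions off exit codes: the config check treats `ls` exiting with status 2 as "stale configs absent, skip cleanup". A local analogue using os/exec, capturing combined output and the exit status (a simplification; the real ssh_runner executes these over SSH):

    import "os/exec"

    // runBash runs a shell command the way the log's ssh_runner lines do,
    // returning combined output and the process exit code.
    func runBash(script string) (string, int, error) {
        cmd := exec.Command("/bin/bash", "-c", script)
        out, err := cmd.CombinedOutput()
        if exitErr, ok := err.(*exec.ExitError); ok {
            return string(out), exitErr.ExitCode(), nil // non-zero exit, e.g. status 2 above
        }
        return string(out), 0, err // err is non-nil only if the command failed to start
    }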
	I0813 21:06:55.923186   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:58.413947   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:00.414960   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.401732   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:59.404622   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.901431   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.146030   10272 out.go:204]   - Generating certificates and keys ...
	I0813 21:06:59.013910   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.514845   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:02.514874   10272 out.go:204]   - Booting up control plane ...
	I0813 21:07:02.420421   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.921161   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:03.901922   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.400821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.017697   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.512767   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:07.415160   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.916408   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:08.402752   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:10.903350   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.011421   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:11.015678   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.515855   10867 pod_ready.go:102] pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.594414   10272 out.go:204]   - Configuring RBAC rules ...
	I0813 21:07:15.029321   10272 cni.go:93] Creating CNI manager for ""
	I0813 21:07:15.029346   10272 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:07:15.031000   10272 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:07:15.031061   10272 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:07:15.039108   10272 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
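The scp line above writes a 457-byte bridge CNI config from memory to /etc/cni/net.d/1-k8s.conflist on the guest. The file's exact contents are not in this log; the sketch below writes a typical minimal bridge+host-local conflist (the JSON body and subnet are assumptions, not the file minikube shipped):

    import "os"

    // conflist is an illustrative bridge CNI config; the real
    // /etc/cni/net.d/1-k8s.conflist may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [{
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
      }]
    }`

    func writeBridgeConflist() error {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil { // cf. "sudo mkdir -p" above
            return err
        }
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
    }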
	I0813 21:07:15.058649   10272 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:07:15.058707   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.058717   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205823-30853 minikube.k8s.io/updated_at=2021_08_13T21_07_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:15.095343   10272 ops.go:34] apiserver oom_adj: 16
	I0813 21:07:15.095372   10272 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 21:07:15.095386   10272 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
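The ops.go lines above read the apiserver's OOM score adjustment from /proc/<pid>/oom_adj (16 here) and lower it to -10, making the kernel less likely to kill the apiserver under memory pressure. A sketch of that read-then-write, assuming local /proc access (oom_adj is the legacy knob; newer kernels prefer oom_score_adj):

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // adjustOOM lowers a process's legacy oom_adj value, as in the
    // "adjusting apiserver oom_adj to -10" step above.
    func adjustOOM(pid, want int) error {
        path := fmt.Sprintf("/proc/%d/oom_adj", pid)
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        cur, err := strconv.Atoi(strings.TrimSpace(string(raw)))
        if err != nil {
            return err
        }
        if cur <= want {
            return nil // already at least this protected
        }
        return os.WriteFile(path, []byte(strconv.Itoa(want)), 0644)
    }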
	I0813 21:07:15.330590   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:12.413115   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.414512   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.400030   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.403757   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.505147   10867 pod_ready.go:81] duration metric: took 4m0.402080118s waiting for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" ...
	E0813 21:07:15.505169   10867 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-wcctz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:07:15.505190   10867 pod_ready.go:38] duration metric: took 4m39.330917946s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:15.505243   10867 kubeadm.go:604] restartCluster took 5m2.104930788s
	W0813 21:07:15.505419   10867 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:07:15.505453   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:07:15.931748   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.430811   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.930834   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.430845   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:17.930776   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.431732   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:18.930812   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.431647   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:19.931099   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.431444   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:16.414885   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.422404   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:17.901988   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.403379   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.930893   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.430961   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:21.931774   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.431310   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:22.931068   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.431314   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:23.931570   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.431290   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:24.931320   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:25.431531   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:20.914560   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.914642   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.916586   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.902451   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.903333   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.931646   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.431685   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.931719   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.431409   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:27.930888   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.431524   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:28.931535   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.431073   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:29.931502   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:30.430962   10272 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:07:26.919653   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.418420   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:30.543916   10272 kubeadm.go:985] duration metric: took 15.48526077s to wait for elevateKubeSystemPrivileges.
	I0813 21:07:30.543949   10272 kubeadm.go:392] StartCluster complete in 5m56.564780701s
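The long run of `kubectl get sa default` lines above is the elevateKubeSystemPrivileges step: after binding kube-system:default to cluster-admin, minikube polls every ~500ms until the default service account exists, which took 15.48s here. A client-go sketch of that wait, assuming the "default" namespace (names are illustrative, not minikube's API):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls for the "default" ServiceAccount, mirroring
    // the repeated "kubectl get sa default" lines above.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }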
	I0813 21:07:30.543981   10272 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:30.544141   10272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:07:30.545813   10272 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:07:31.081760   10272 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205823-30853" rescaled to 1
	I0813 21:07:31.081820   10272 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.49 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 21:07:31.083916   10272 out.go:177] * Verifying Kubernetes components...
	I0813 21:07:31.083983   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:07:31.081886   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:07:31.081888   10272 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:07:31.084080   10272 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084084   10272 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084099   10272 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.082132   10272 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0813 21:07:31.084108   10272 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:07:31.084120   10272 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084134   10272 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084143   10272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205823-30853"
	I0813 21:07:31.084151   10272 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084158   10272 addons.go:147] addon metrics-server should already be in state true
	I0813 21:07:31.084100   10272 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.084168   10272 addons.go:147] addon dashboard should already be in state true
	I0813 21:07:31.084183   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084189   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084158   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084632   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084685   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084687   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084751   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084792   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.084631   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.084865   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.105064   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0813 21:07:31.105078   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0813 21:07:31.105589   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105724   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.105733   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0813 21:07:31.105826   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0813 21:07:31.106201   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106225   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106288   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.106388   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106410   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106656   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106795   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.106823   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.106845   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.106940   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.107274   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.107310   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.107372   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.107393   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.107505   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.107679   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.107914   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.108023   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108066   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.108456   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.108502   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.121147   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0813 21:07:31.120919   10272 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205823-30853"
	W0813 21:07:31.121411   10272 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:07:31.121457   10272 host.go:66] Checking if "old-k8s-version-20210813205823-30853" exists ...
	I0813 21:07:31.121491   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0813 21:07:31.121993   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.122297   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.122764   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123195   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.123739   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123763   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.123790   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.123822   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.124154   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124287   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.124315   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.124496   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.128429   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130930   10272 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:07:31.129602   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.130875   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0813 21:07:31.132382   10272 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.132436   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:07:31.132451   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:07:31.132474   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.134119   10272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:07:31.134224   10272 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.134241   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:07:31.134259   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.132855   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.135094   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.135114   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.135252   10272 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.135886   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.136518   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.140126   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.140398   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:27.404366   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.901079   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.902091   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.142209   10272 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:07:31.142270   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:07:31.140792   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142282   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:07:31.140956   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.142313   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142015   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.142480   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.142494   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.142517   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.142738   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.142977   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.143006   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143155   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.143333   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.143530   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.143544   10272 node_ready.go:49] node "old-k8s-version-20210813205823-30853" has status "Ready":"True"
	I0813 21:07:31.143557   10272 node_ready.go:38] duration metric: took 8.284522ms waiting for node "old-k8s-version-20210813205823-30853" to be "Ready" ...
	I0813 21:07:31.143568   10272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:31.145891   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0813 21:07:31.146234   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.146769   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.146792   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.147190   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.147843   10272 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:07:31.147892   10272 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:07:31.148364   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148819   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.148848   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.148994   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.149157   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.149288   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.149464   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
	I0813 21:07:31.154492   10272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:31.159199   10272 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0813 21:07:31.159608   10272 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:07:31.160083   10272 main.go:130] libmachine: Using API Version  1
	I0813 21:07:31.160107   10272 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:07:31.160442   10272 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:07:31.160628   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetState
	I0813 21:07:31.163581   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .DriverName
	I0813 21:07:31.163764   10272 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.163780   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:07:31.163796   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHHostname
	I0813 21:07:31.169112   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169507   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:6e:80", ip: ""} in network mk-old-k8s-version-20210813205823-30853: {Iface:virbr5 ExpiryTime:2021-08-13 22:01:12 +0000 UTC Type:0 Mac:52:54:00:ed:6e:80 Iaid: IPaddr:192.168.83.49 Prefix:24 Hostname:old-k8s-version-20210813205823-30853 Clientid:01:52:54:00:ed:6e:80}
	I0813 21:07:31.169535   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | domain old-k8s-version-20210813205823-30853 has defined IP address 192.168.83.49 and MAC address 52:54:00:ed:6e:80 in network mk-old-k8s-version-20210813205823-30853
	I0813 21:07:31.169656   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHPort
	I0813 21:07:31.169820   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHKeyPath
	I0813 21:07:31.170004   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .GetSSHUsername
	I0813 21:07:31.170153   10272 sshutil.go:53] new ssh client: &{IP:192.168.83.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205823-30853/id_rsa Username:docker}
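
The sshutil.go:53 lines above assemble their client from the libmachine getters (.GetSSHHostname, .GetSSHPort, .GetSSHKeyPath, .GetSSHUsername): IP 192.168.83.49, port 22, the per-machine id_rsa, user "docker". A hedged sketch of that dial using golang.org/x/crypto/ssh — the function name and the host-key shortcut are illustrative only, not minikube's actual helper:

    package sketch

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialGuest opens the kind of connection the sshutil.go lines above
    // describe: key-based auth as "docker" against the libvirt guest on port 22.
    func dialGuest(ip, keyPath string) (*ssh.Client, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; sketch-only shortcut
            Timeout:         10 * time.Second,
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }
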
	I0813 21:07:31.334616   10272 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:07:31.339091   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:07:31.350144   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:07:31.350160   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:07:31.366866   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:07:31.366889   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:07:31.415434   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:07:31.415460   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:07:31.415813   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:07:31.439763   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:07:31.439787   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:07:31.551531   10272 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.551559   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:07:31.614721   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:07:31.614757   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:07:31.648730   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:07:31.686266   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:07:31.686288   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:07:31.766323   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:07:31.766354   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:07:32.021208   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:07:32.021232   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:07:32.128868   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:07:32.128914   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:07:32.396755   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:07:32.396784   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:07:32.629623   10272 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:32.629647   10272 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:07:32.876963   10272 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:07:33.170819   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.554610   10272 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.219955078s)
	I0813 21:07:33.554661   10272 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
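
The sed pipeline that just completed inserts a hosts block ahead of the forward directive in the CoreDNS Corefile, then replaces the configmap, which is what the "host record injected into CoreDNS" line reports. The injected fragment, recovered directly from the logged command:

    hosts {
       192.168.83.1 host.minikube.internal
       fallthrough
    }

With this in place, pods resolving host.minikube.internal get the host-side gateway address, and anything else falls through to the normal forwarders.
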
	I0813 21:07:33.554710   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.215586915s)
	I0813 21:07:33.554766   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.138920482s)
	I0813 21:07:33.554845   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554810   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.554909   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.554882   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555205   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555224   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555237   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555251   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.555322   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.555339   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.555337   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.555352   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.555362   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.557880   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557881   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557894   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.557900   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557931   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.557951   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:33.557969   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:33.558002   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:33.558255   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:33.558287   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:33.558297   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.417993   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.769219397s)
	I0813 21:07:34.418041   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.418055   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.419702   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.419703   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.419721   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.419735   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.419744   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.420013   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.420030   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.420042   10272 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813205823-30853"
	I0813 21:07:34.719323   10272 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.842300346s)
	I0813 21:07:34.719378   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719393   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.719692   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.719710   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.719720   10272 main.go:130] libmachine: Making call to close driver server
	I0813 21:07:34.719731   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) Calling .Close
	I0813 21:07:34.721171   10272 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:07:34.721190   10272 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:07:34.721177   10272 main.go:130] libmachine: (old-k8s-version-20210813205823-30853) DBG | Closing plugin on server side
	I0813 21:07:34.723692   10272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:07:34.723719   10272 addons.go:344] enableAddons completed in 3.64184317s
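
Each addon step above pairs an addons.go:275 "installing" line with an ssh_runner.go:316 "scp memory" line: the manifest is held in memory, streamed to the guest over the SSH session, and finally applied with the version-pinned kubectl (sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f ...). A minimal sketch of the copy half, assuming golang.org/x/crypto/ssh; the helper name and the tee-based write are illustrative, not minikube's actual implementation:

    package sketch

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // copyMemory streams an in-memory manifest to destPath on the guest,
    // mirroring the "scp memory --> /etc/kubernetes/addons/... (N bytes)" lines.
    func copyMemory(client *ssh.Client, data []byte, destPath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()

        stdin, err := sess.StdinPipe()
        if err != nil {
            return err
        }
        // Write stdin to the destination via sudo tee, since the addons
        // directory is root-owned on the guest.
        if err := sess.Start(fmt.Sprintf("sudo tee %q >/dev/null", destPath)); err != nil {
            return err
        }
        if _, err := stdin.Write(data); err != nil {
            return err
        }
        stdin.Close()
        return sess.Wait()
    }
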
	I0813 21:07:31.421963   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.916790   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.903029   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.402184   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:35.688121   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.171925   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.422423   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.916463   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.403153   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.903100   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.668346   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.668696   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:44.669555   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.922382   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.982831   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.413525   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:43.402566   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.905536   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:46.733235   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.227754709s)
	I0813 21:07:46.733320   10867 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:07:46.749380   10867 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:07:46.749451   10867 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:07:46.789090   10867 cri.go:76] found id: ""
	I0813 21:07:46.789192   10867 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:07:46.797753   10867 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:07:46.805773   10867 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
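
kubeadm.go:151 above gates stale-config cleanup on a single `ls -la` over all four kubeconfigs; because the `kubeadm reset` that completed at 21:07:46 removed them, the command exits with status 2 and cleanup is skipped before `kubeadm init` is re-run. A hedged sketch of that gate, with local exec standing in for the SSH runner:

    package sketch

    import (
        "log"
        "os/exec"
    )

    var staleConfigs = []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }

    // needsCleanup reports whether all expected kubeconfigs are present; a
    // non-zero ls exit (any file missing) means there is nothing stale to wipe.
    func needsCleanup() bool {
        cmd := exec.Command("sudo", "ls", "-la")
        cmd.Args = append(cmd.Args, staleConfigs...)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Printf("config check failed, skipping stale config cleanup: %v\n%s", err, out)
            return false
        }
        return true
    }
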
	I0813 21:07:46.805816   10867 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:07:47.366092   10867 out.go:204]   - Generating certificates and keys ...
	I0813 21:07:48.287070   10867 out.go:204]   - Booting up control plane ...
	I0813 21:07:46.669635   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.169303   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.414190   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.914581   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:48.403863   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:50.902452   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.170024   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.672034   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:52.419570   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:54.922828   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.400843   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:55.401813   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.169442   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.173990   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:00.180299   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.414460   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.414953   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.402188   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.407382   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:01.902586   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.672361   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.168918   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.917732   10867 out.go:204]   - Configuring RBAC rules ...
	I0813 21:08:05.478215   10867 cni.go:93] Creating CNI manager for ""
	I0813 21:08:05.478240   10867 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:08:01.415978   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.916377   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.903277   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.908821   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.480079   10867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:08:05.480166   10867 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:08:05.490836   10867 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
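
The 457-byte 1-k8s.conflist streamed above is the bridge CNI chain that the "* Configuring bridge CNI" step refers to. Its exact contents are not reproduced in the log; a typical bridge conflist of roughly that shape looks like the following (the subnet, names, and flags are placeholders, and minikube's actual file may differ):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
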
	I0813 21:08:05.516775   10867 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813205917-30853 minikube.k8s.io/updated_at=2021_08_13T21_08_05_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.516826   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:05.571274   10867 ops.go:34] apiserver oom_adj: -16
	I0813 21:08:05.877007   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.498456   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:06.997686   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.498266   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.998377   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:08.498124   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:07.171495   10272 pod_ready.go:102] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.171976   10272 pod_ready.go:92] pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.172005   10272 pod_ready.go:81] duration metric: took 37.017483324s waiting for pod "coredns-fb8b8dccf-j78d5" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.172023   10272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178546   10272 pod_ready.go:92] pod "kube-proxy-xnqfc" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:08.178572   10272 pod_ready.go:81] duration metric: took 6.540181ms waiting for pod "kube-proxy-xnqfc" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:08.178582   10272 pod_ready.go:38] duration metric: took 37.035002251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:08.178607   10272 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:08.178659   10272 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:08.193211   10272 api_server.go:70] duration metric: took 37.111356956s to wait for apiserver process to appear ...
	I0813 21:08:08.193234   10272 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:08.193245   10272 api_server.go:239] Checking apiserver healthz at https://192.168.83.49:8443/healthz ...
	I0813 21:08:08.200770   10272 api_server.go:265] https://192.168.83.49:8443/healthz returned 200:
	ok
	I0813 21:08:08.201945   10272 api_server.go:139] control plane version: v1.14.0
	I0813 21:08:08.201960   10272 api_server.go:129] duration metric: took 8.721341ms to wait for apiserver health ...
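
api_server.go above gates on two separate signals: first a kube-apiserver process found via pgrep, then an HTTPS GET against /healthz that must return 200 with body "ok". A self-contained sketch of the probe; the real client trusts the cluster CA, and InsecureSkipVerify here is only to keep the example short:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // healthy reports whether the apiserver /healthz endpoint returns 200 "ok".
    func healthy(url string) bool {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return err == nil && resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
        fmt.Println(healthy("https://192.168.83.49:8443/healthz"))
    }
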
	I0813 21:08:08.201968   10272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:08.206023   10272 system_pods.go:59] 4 kube-system pods found
	I0813 21:08:08.206043   10272 system_pods.go:61] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206047   10272 system_pods.go:61] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206054   10272 system_pods.go:61] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.206058   10272 system_pods.go:61] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.206065   10272 system_pods.go:74] duration metric: took 4.091873ms to wait for pod list to return data ...
	I0813 21:08:08.206072   10272 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:08.209997   10272 default_sa.go:45] found service account: "default"
	I0813 21:08:08.210015   10272 default_sa.go:55] duration metric: took 3.938001ms for default service account to be created ...
	I0813 21:08:08.210022   10272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:08.214317   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.214336   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214348   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.214354   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.214373   10272 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
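
The retry.go:31 lines here and below show the readiness poll backing off with growing, jittered delays (214ms, 293ms, 355ms, 480ms, 544ms, 684ms, ~1s, ...) until the static control-plane pods surface in the API. A minimal sketch of that shape, assuming nothing about minikube's actual retry internals:

    package sketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // pollWithBackoff retries check with growing, jittered sleeps until it
    // succeeds or the deadline passes; the jitter is why the logged delays
    // climb irregularly rather than doubling exactly.
    func pollWithBackoff(deadline, base time.Duration, check func() error) error {
        start := time.Now()
        delay := base
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out waiting: %w", err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
    }
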
	I0813 21:08:08.433733   10272 system_pods.go:86] 4 kube-system pods found
	I0813 21:08:08.433762   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433770   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433781   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.433788   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.433807   10272 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:08.735301   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:08.735333   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735341   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735350   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:08.735360   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:08.735366   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:08.735412   10272 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.097711   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.097745   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097753   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097758   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.097765   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.097770   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.097788   10272 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:09.584281   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:09.584311   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584317   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584321   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:09.584329   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:09.584333   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:09.584352   10272 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:10.134667   10272 system_pods.go:86] 5 kube-system pods found
	I0813 21:08:10.134694   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134701   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134706   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.134712   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.134716   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.134738   10272 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:05.922361   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.419726   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.401315   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.909126   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.998041   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.498515   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:09.998297   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.498018   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.997716   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.497679   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:11.998238   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.498701   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:12.997887   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:13.498358   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:10.825951   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:10.825981   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825987   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.825991   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.825995   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:10.826001   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:10.826006   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:10.826027   10272 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:11.871229   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:11.871263   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871270   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871274   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871279   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:11.871292   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:11.871300   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:11.871321   10272 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0813 21:08:12.905014   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:12.905044   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905052   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905075   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:12.905081   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:12.905105   10272 retry.go:31] will retry after 1.268973106s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:14.179146   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:14.179173   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179179   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179183   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179188   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179195   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:14.179202   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:14.179223   10272 retry.go:31] will retry after 1.733071555s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:10.914496   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.924919   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.415784   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.401246   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.408120   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.997632   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.497943   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:14.998249   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.498543   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:15.998283   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.497729   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:16.997873   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.497972   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:17.997958   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.497761   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:18.997883   10867 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:08:19.220539   10867 kubeadm.go:985] duration metric: took 13.703767036s to wait for elevateKubeSystemPrivileges.
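
The ~500ms cadence of `kubectl get sa default` above is a fixed-interval wait: kubeadm creates the default service account asynchronously after init, and the elevateKubeSystemPrivileges step cannot finish until it exists. A hedged sketch of that loop; the binary and kubeconfig paths are taken from the log, the loop itself is illustrative:

    package sketch

    import (
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls the cluster on a ~500ms tick, mirroring the
    // elevateKubeSystemPrivileges wait above, until the default service
    // account appears or the timeout elapses.
    func waitForDefaultSA(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.21.3/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }
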
	I0813 21:08:19.220607   10867 kubeadm.go:392] StartCluster complete in 6m5.865041156s
	I0813 21:08:19.220635   10867 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.220787   10867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:08:19.223909   10867 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:08:19.752954   10867 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813205917-30853" rescaled to 1
	I0813 21:08:19.753018   10867 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:08:19.754708   10867 out.go:177] * Verifying Kubernetes components...
	I0813 21:08:19.754778   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:19.753082   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:08:19.753107   10867 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:08:19.753299   10867 config.go:177] Loaded profile config "embed-certs-20210813205917-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:08:19.754891   10867 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754904   10867 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754933   10867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813205917-30853"
	I0813 21:08:19.754932   10867 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754940   10867 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813205917-30853"
	I0813 21:08:19.754970   10867 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:19.754974   10867 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.754988   10867 addons.go:147] addon dashboard should already be in state true
	W0813 21:08:19.754987   10867 addons.go:147] addon metrics-server should already be in state true
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.754914   10867 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.755116   10867 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:08:19.755134   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755026   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755511   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755539   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755462   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755571   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755606   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.755637   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.755686   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.770580   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0813 21:08:19.771121   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.771377   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0813 21:08:19.771830   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.771853   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.771954   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.772247   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.772723   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.772739   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.772901   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0813 21:08:19.773026   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.773068   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.773413   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.773902   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.773924   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.774397   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774463   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.774563   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.775023   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.775063   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.784550   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0813 21:08:19.784959   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.785506   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.785522   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.785894   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.786493   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.786525   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787205   10867 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813205917-30853"
	W0813 21:08:19.787228   10867 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:08:19.787259   10867 host.go:66] Checking if "embed-certs-20210813205917-30853" exists ...
	I0813 21:08:19.787583   10867 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.787674   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.787718   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.787787   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0813 21:08:19.787910   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0813 21:08:19.788204   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789084   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789106   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.789211   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.789825   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.789931   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.789953   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.790005   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.790276   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.790437   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.794978   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.794986   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.797284   10867 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.798757   10867 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:08:19.797345   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:08:19.798798   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:08:19.798822   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800334   10867 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:08:19.800389   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:08:19.800399   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:08:19.799838   10867 node_ready.go:49] node "embed-certs-20210813205917-30853" has status "Ready":"True"
	I0813 21:08:19.800420   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.800422   10867 node_ready.go:38] duration metric: took 12.815275ms waiting for node "embed-certs-20210813205917-30853" to be "Ready" ...
	I0813 21:08:19.800442   10867 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:19.802028   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0813 21:08:19.802460   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.802983   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.803025   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.803483   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.803731   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.809104   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.809531   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.809654   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:15.917751   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:15.917783   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917792   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917799   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917805   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917816   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:15.917823   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:15.917844   10272 retry.go:31] will retry after 2.410580953s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:18.337846   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:18.337883   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337892   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337898   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337905   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337916   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:18.337923   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:18.337944   10272 retry.go:31] will retry after 3.437877504s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:17.916739   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:20.415225   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.901469   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.902763   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.903648   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.811430   10867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:08:19.810007   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811541   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.811578   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811581   10867 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:19.810168   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.810293   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.810559   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.811047   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0813 21:08:19.811649   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.811674   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:08:19.811689   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.811908   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.811910   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812038   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.812443   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.812464   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.812475   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.813065   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.813083   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.813470   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.814035   10867 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:08:19.814070   10867 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:08:19.818289   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818751   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.818811   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.818838   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.818903   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.819054   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.819209   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:19.825837   10867 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0813 21:08:19.826199   10867 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:08:19.826605   10867 main.go:130] libmachine: Using API Version  1
	I0813 21:08:19.826624   10867 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:08:19.826952   10867 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:08:19.827127   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetState
	I0813 21:08:19.830318   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .DriverName
	I0813 21:08:19.830538   10867 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:19.830553   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:08:19.830570   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHHostname
	I0813 21:08:19.835761   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836143   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:fe:b8", ip: ""} in network mk-embed-certs-20210813205917-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:01:49 +0000 UTC Type:0 Mac:52:54:00:cb:fe:b8 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-20210813205917-30853 Clientid:01:52:54:00:cb:fe:b8}
	I0813 21:08:19.836172   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | domain embed-certs-20210813205917-30853 has defined IP address 192.168.39.156 and MAC address 52:54:00:cb:fe:b8 in network mk-embed-certs-20210813205917-30853
	I0813 21:08:19.836286   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHPort
	I0813 21:08:19.836451   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHKeyPath
	I0813 21:08:19.836602   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .GetSSHUsername
	I0813 21:08:19.836724   10867 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813205917-30853/id_rsa Username:docker}
	I0813 21:08:20.037292   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:08:20.037321   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:08:20.099263   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:08:20.099292   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:08:20.117736   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:08:20.146467   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:08:20.146494   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:08:20.148636   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:08:20.180430   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:08:20.180464   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:08:20.300161   10867 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:08:20.301107   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:08:20.301131   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:08:20.311540   10867 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.311565   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:08:20.390587   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:08:20.390623   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:08:20.411556   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:08:20.513347   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:08:20.513381   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:08:20.562665   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:08:20.562692   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:08:20.637151   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:08:20.637186   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:08:20.697238   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:08:20.697266   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:08:20.722593   10867 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:20.722622   10867 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:08:20.888939   10867 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:08:21.832691   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:22.499631   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.381850453s)
	I0813 21:08:22.499694   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.499708   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.499992   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500011   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500021   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500031   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500251   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500299   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.500317   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.500327   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.500578   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.500587   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:22.500601   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607350   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458674806s)
	I0813 21:08:22.607409   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607423   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607684   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607702   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.607713   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:22.607728   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:22.607970   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:22.607987   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:22.671948   10867 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.371722218s)
	I0813 21:08:22.671991   10867 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 21:08:23.212733   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.801121223s)
	I0813 21:08:23.212785   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.212801   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213078   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213122   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213131   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213147   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:23.213162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:23.213417   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) DBG | Closing plugin on server side
	I0813 21:08:23.213454   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:23.213463   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:23.213476   10867 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813205917-30853"
	I0813 21:08:23.973313   10867 pod_ready.go:102] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.127694   10867 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.238655669s)
	I0813 21:08:24.127768   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.127783   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128088   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128134   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:24.128152   10867 main.go:130] libmachine: Making call to close driver server
	I0813 21:08:24.128162   10867 main.go:130] libmachine: (embed-certs-20210813205917-30853) Calling .Close
	I0813 21:08:24.128402   10867 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:08:24.128416   10867 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:08:21.783186   10272 system_pods.go:86] 6 kube-system pods found
	I0813 21:08:21.783216   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783222   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783226   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783231   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783238   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:21.783242   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:21.783260   10272 retry.go:31] will retry after 3.261655801s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:25.051995   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:25.052028   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052037   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052051   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:25.052058   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052065   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052076   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:25.052086   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:25.052104   10272 retry.go:31] will retry after 4.086092664s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:22.421981   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.915565   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.903699   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:25.903987   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.130282   10867 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:08:24.130308   10867 addons.go:344] enableAddons completed in 4.377209962s
	I0813 21:08:26.342246   10867 pod_ready.go:92] pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:26.342272   10867 pod_ready.go:81] duration metric: took 6.532595189s waiting for pod "coredns-558bd4d5db-8bmrm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:26.342282   10867 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:28.367486   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.149965   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:29.149997   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150006   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150013   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:29.150019   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150025   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150035   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:29.150043   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:29.150063   10272 retry.go:31] will retry after 6.402197611s: missing components: kube-apiserver, kube-controller-manager
	I0813 21:08:26.928284   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.416662   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.403505   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.906239   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.367630   10867 pod_ready.go:102] pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.386002   10867 pod_ready.go:97] error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386040   10867 pod_ready.go:81] duration metric: took 5.043748322s waiting for pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace to be "Ready" ...
	E0813 21:08:31.386053   10867 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-zdlnb" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-zdlnb" not found
	I0813 21:08:31.386063   10867 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395413   10867 pod_ready.go:92] pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.395442   10867 pod_ready.go:81] duration metric: took 9.37037ms waiting for pod "etcd-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.395456   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407839   10867 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.407860   10867 pod_ready.go:81] duration metric: took 12.39509ms waiting for pod "kube-apiserver-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.407872   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413811   10867 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.413832   10867 pod_ready.go:81] duration metric: took 5.950273ms waiting for pod "kube-controller-manager-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.413845   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422794   10867 pod_ready.go:92] pod "kube-proxy-szvqm" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.422819   10867 pod_ready.go:81] duration metric: took 8.966458ms waiting for pod "kube-proxy-szvqm" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.422831   10867 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564060   10867 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:31.564136   10867 pod_ready.go:81] duration metric: took 141.29321ms waiting for pod "kube-scheduler-embed-certs-20210813205917-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:31.564168   10867 pod_ready.go:38] duration metric: took 11.763707327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:08:31.564208   10867 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:08:31.564290   10867 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:08:31.578890   10867 api_server.go:70] duration metric: took 11.8258395s to wait for apiserver process to appear ...
	I0813 21:08:31.578919   10867 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:08:31.578932   10867 api_server.go:239] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0813 21:08:31.585647   10867 api_server.go:265] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0813 21:08:31.586833   10867 api_server.go:139] control plane version: v1.21.3
	I0813 21:08:31.586868   10867 api_server.go:129] duration metric: took 7.925906ms to wait for apiserver health ...
	I0813 21:08:31.586879   10867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:08:31.766375   10867 system_pods.go:59] 8 kube-system pods found
	I0813 21:08:31.766406   10867 system_pods.go:61] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:31.766412   10867 system_pods.go:61] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:31.766416   10867 system_pods.go:61] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:31.766421   10867 system_pods.go:61] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:31.766424   10867 system_pods.go:61] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:31.766428   10867 system_pods.go:61] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:31.766436   10867 system_pods.go:61] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:31.766440   10867 system_pods.go:61] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:31.766447   10867 system_pods.go:74] duration metric: took 179.562479ms to wait for pod list to return data ...
	I0813 21:08:31.766456   10867 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:08:31.964873   10867 default_sa.go:45] found service account: "default"
	I0813 21:08:31.964899   10867 default_sa.go:55] duration metric: took 198.43488ms for default service account to be created ...
	I0813 21:08:31.964911   10867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:08:32.168305   10867 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:32.168349   10867 system_pods.go:89] "coredns-558bd4d5db-8bmrm" [23a5740e-bd96-4bd0-851e-4abc81b7ddff] Running
	I0813 21:08:32.168359   10867 system_pods.go:89] "etcd-embed-certs-20210813205917-30853" [7061779a-83ef-4ed4-9512-ec936a2d94d1] Running
	I0813 21:08:32.168369   10867 system_pods.go:89] "kube-apiserver-embed-certs-20210813205917-30853" [796645fb-0142-415b-96c2-9b640f680514] Running
	I0813 21:08:32.168377   10867 system_pods.go:89] "kube-controller-manager-embed-certs-20210813205917-30853" [d17159ee-4ac6-4f2a-aaad-cd3af7317e02] Running
	I0813 21:08:32.168384   10867 system_pods.go:89] "kube-proxy-szvqm" [d116fa9a-0229-40cf-ae60-5d89fb7716f1] Running
	I0813 21:08:32.168390   10867 system_pods.go:89] "kube-scheduler-embed-certs-20210813205917-30853" [b888e2ad-9504-4e54-8156-8d30bb432d37] Running
	I0813 21:08:32.168402   10867 system_pods.go:89] "metrics-server-7c784ccb57-qc7sb" [43aa1ab2-5284-4d76-b826-12fd50a0ba54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:32.168412   10867 system_pods.go:89] "storage-provisioner" [f70d6e8f-2aca-49ac-913a-73ddf71ae6ee] Running
	I0813 21:08:32.168423   10867 system_pods.go:126] duration metric: took 203.506299ms to wait for k8s-apps to be running ...
	I0813 21:08:32.168436   10867 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:32.168487   10867 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:32.183556   10867 system_svc.go:56] duration metric: took 15.110742ms WaitForService to wait for kubelet.
	I0813 21:08:32.183585   10867 kubeadm.go:547] duration metric: took 12.430541017s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:32.183611   10867 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:32.366938   10867 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:32.366970   10867 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:32.366989   10867 node_conditions.go:105] duration metric: took 183.372537ms to run NodePressure ...
	I0813 21:08:32.367004   10867 start.go:231] waiting for startup goroutines ...
	I0813 21:08:32.428402   10867 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:08:32.430754   10867 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813205917-30853" cluster and "default" namespace by default
	I0813 21:08:31.925048   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.421689   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:33.402937   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.404185   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:35.559235   10272 system_pods.go:86] 7 kube-system pods found
	I0813 21:08:35.559264   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559272   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559278   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559284   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559289   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559299   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:35.559305   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:35.559325   10272 retry.go:31] will retry after 6.062999549s: missing components: kube-controller-manager
	I0813 21:08:36.917628   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:39.412918   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:37.902004   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.400508   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:41.627792   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:41.627828   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627837   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627844   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627851   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Pending
	I0813 21:08:41.627857   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627863   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627874   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:41.627882   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:41.627906   10272 retry.go:31] will retry after 10.504197539s: missing components: kube-controller-manager
	I0813 21:08:41.415467   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.418679   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.419622   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:42.401588   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:44.413733   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:46.903773   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.914837   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.413949   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.140470   10272 system_pods.go:86] 8 kube-system pods found
	I0813 21:08:52.140498   10272 system_pods.go:89] "coredns-fb8b8dccf-j78d5" [7887fb24-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140503   10272 system_pods.go:89] "etcd-old-k8s-version-20210813205823-30853" [909a365b-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140508   10272 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205823-30853" [97c12c90-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140512   10272 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205823-30853" [9f80b2c3-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140516   10272 system_pods.go:89] "kube-proxy-xnqfc" [78b26ce7-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140520   10272 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205823-30853" [8f6a91a4-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140526   10272 system_pods.go:89] "metrics-server-8546d8b77b-mm6vs" [7b5aec25-fc7a-11eb-b132-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:08:52.140531   10272 system_pods.go:89] "storage-provisioner" [7a581997-fc7a-11eb-b132-525400ed6e80] Running
	I0813 21:08:52.140549   10272 system_pods.go:126] duration metric: took 43.930520866s to wait for k8s-apps to be running ...
	I0813 21:08:52.140578   10272 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:08:52.140627   10272 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:08:52.153255   10272 system_svc.go:56] duration metric: took 12.668182ms WaitForService to wait for kubelet.
	I0813 21:08:52.153279   10272 kubeadm.go:547] duration metric: took 1m21.071431976s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:08:52.153300   10272 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:08:52.156915   10272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:08:52.156939   10272 node_conditions.go:123] node cpu capacity is 2
	I0813 21:08:52.156953   10272 node_conditions.go:105] duration metric: took 3.648615ms to run NodePressure ...
	I0813 21:08:52.156962   10272 start.go:231] waiting for startup goroutines ...
	I0813 21:08:52.202043   10272 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 21:08:52.204217   10272 out.go:177] 
	W0813 21:08:52.204388   10272 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 21:08:52.206057   10272 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:08:52.207407   10272 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813205823-30853" cluster and "default" namespace by default
	I0813 21:08:48.904448   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.401687   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.414001   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.916108   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:53.903280   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.402202   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.918707   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.414767   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:58.402828   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:00.404574   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.415921   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:03.415961   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.418118   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:02.902981   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.407750   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:01:11 UTC, end at Fri 2021-08-13 21:09:08 UTC. --
	Aug 13 21:09:06 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:06.979161155Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,StartedAt:1628888855276809576,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7a581997-fc7a-11eb-b132-525400ed6e80/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7a581997-fc7a-11eb-b132-525400ed6e80/containers/storage-provisioner/2d1c69eb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/7a581997-fc7a-11eb-b132-525400ed6e80/volumes/kubernetes.io~secret/storage-provisioner-token-28nxs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_7a581997-fc7a-11eb-b132-525400ed6e80/storage-provisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=37fb7d6d-95d6-4c25-bc42-a58b77a51596 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.200335493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=24d2514b-08c4-42b5-a2d8-2d9795114da9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.200493234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=24d2514b-08c4-42b5-a2d8-2d9795114da9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.200922236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXITED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=24d2514b-08c4-42b5-a2d8-2d9795114da9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.240970897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e2d29cf6-c790-49b8-be53-58a90c0d9bdc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.241030926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e2d29cf6-c790-49b8-be53-58a90c0d9bdc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.241240250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXITED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e2d29cf6-c790-49b8-be53-58a90c0d9bdc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.429918807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=75a599e1-4490-4e98-bc48-5aa6c2a2790c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.430086451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=75a599e1-4490-4e98-bc48-5aa6c2a2790c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:09:08 old-k8s-version-20210813205823-30853 crio[2041]: time="2021-08-13 21:09:08.431317226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdedf4f5fea5219d9d34381090a0ba96c6abc6cc7be4ec3c61328856e5d84d90,PodSandboxId:d95d21dd1243cc20c27034e3aa2493b3f2a434d699c5db1866d1872de3661fe8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628888904934265625,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5b494cc544-2vltn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba32335-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 3487aa72,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_RUNNING,CreatedAt:1628888883660149039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6,PodSandboxId:cc0cd5379a478bb2e6f832965df365993c79185381ab434aec017605803350f8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628888856652997709,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5d8978d65d-264rf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7ba3c406-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 7d32bb93,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6,PodSandboxId:91b18347830f0035b48d96fe3a8e3afe656dcc26b9da6950ba5609b0499ddebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628888855212366429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a581997-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5395c3c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49,PodSandboxId:18dceec12c1d906cc372bcf546694b6593ebf49683dfc7769d30f92b76e58442,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4,State:CONTAINER_EXITED,CreatedAt:1628888852767009742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-fb8b8dccf-j78d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7887fb24-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: 64a6f0e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db,PodSandboxId:fe909e4ddd0b0aea9bd365dc5c520b7fbebe93b9db18609370ac5c8024324af7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:a704064100b363856afa4cee160a51948b9ac49bbc34ba97caeb7928055e9de1,State:CONTAINER_RUNNING,CreatedAt:1628888851814073077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnqfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78b26ce7-fc7a-11eb-b132-525400ed6e80,},Annotations:map[string]string{io.kubernetes.container.hash: a5f81478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e,PodSandboxId:f6f45c2d9b6a548693623937ec0bdc446be4463254d26db13cb740e1960ea30c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7,State:CONTAINER_RUNNING,CreatedAt:1628888825778824830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f353d9ba98cec5c012ec9e36582c12,},Annotations:map[string]string{io.kubernetes.container.hash: 7388f6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770,PodSandboxId:a4702c1a10590b40d060c2c8abc03bf264711eecec9b4be3574360096b2edf26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150,Annotations:map[string]string{}
,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:09c62c11cdfe8dc43e0314174271ca434329c7991d6db5ef7c41a95da399cbf8,State:CONTAINER_RUNNING,CreatedAt:1628888824515208833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42c3e5fa8e81a5a78a3a372f8953126,},Annotations:map[string]string{io.kubernetes.container.hash: cefb4d9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8,PodSandboxId:bf3fb59533f254ee413d7bd72bf074b2c6b55a17b777192fe38c7411d5579c14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6,A
nnotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:5a5183b427e2e4226a3a7411064ee1b9dae5199513f2d7569b5e264a7be0fd06,State:CONTAINER_RUNNING,CreatedAt:1628888824183115531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49870ad58ac58df4b1f0ff4f471c50ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6075b00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3,PodSandboxId:2c658baa473ddf6eda1cb6cddc9a7983d3b20151de4b92d9e0e47a14ef7c9856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a449
2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:0484d3f811282a124e60a48de8f19f91913bac4d0ba0805d2ed259ea3b691a5e,State:CONTAINER_RUNNING,CreatedAt:1628888824015093034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-20210813205823-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba371a1cc55ef6aa89a1ba4554611582,},Annotations:map[string]string{io.kubernetes.container.hash: 4aa69ed7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=75a599e1-4490-4e98-bc48-5aa6c2a2790c name=/runtime.v1alpha2.RuntimeService/ListContainers
	[two further ListContainers request/response cycles at 21:09:08.485 and 21:09:08.528 (ids 9633a47e-5e00-4722-8d33-bffd80c0cc22 and 2cb45470-0102-4bf9-8d82-0fe6e5be98b7) omitted; their container lists are identical to the response above]
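	These entries come from the crio unit journal inside the guest; to pull just the CRI list calls back out of it, a minimal sketch (unit name "crio" assumed, as configured in the minikube guest):
	  out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo journalctl -u crio --no-pager | grep ListContainers | tail -n 20"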
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID
	fdedf4f5fea52       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   43 seconds ago       Exited              dashboard-metrics-scraper   3                   d95d21dd1243c
	c0825e30b45e4       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   About a minute ago   Running             coredns                     1                   18dceec12c1d9
	5b4555812b0f6       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   About a minute ago   Running             kubernetes-dashboard        0                   cc0cd5379a478
	0fdc2c1dd8463       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner         0                   91b18347830f0
	0c5ce365d0f35       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   About a minute ago   Exited              coredns                     0                   18dceec12c1d9
	8ee4160af1974       5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d   About a minute ago   Running             kube-proxy                  0                   fe909e4ddd0b0
	974c6dadfe125       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d   2 minutes ago        Running             etcd                        0                   f6f45c2d9b6a5
	4cfcbd86d9955       b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150   2 minutes ago        Running             kube-controller-manager     0                   a4702c1a10590
	8ba6263efe7a5       ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6   2 minutes ago        Running             kube-apiserver              0                   bf3fb59533f25
	02c918cf1c5c4       00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492   2 minutes ago        Running             kube-scheduler              0                   2c658baa473dd
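	The table above is the container listing as reported over the CRI socket; roughly the same view can be reproduced with crictl (assuming the crictl binary is present in the guest image):
	  out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"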
	
	* 
	* ==> coredns [0c5ce365d0f3538ac9746dd85ed6498b92bf5390e278afd72286de69f51e5e49] <==
	* .:53
	2021-08-13T21:07:37.978Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:07:37.978Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:07:37.978Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	E0813 21:08:02.979044       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 21:08:02.979044       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-j78d5.unknownuser.log.ERROR.20210813-210802.1: no such file or directory
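	The exited coredns container timed out reaching the in-cluster API endpoint (10.96.0.1:443) and then crashed; its pre-restart output can be fetched with --previous (context and pod names taken from this run):
	  kubectl --context old-k8s-version-20210813205823-30853 -n kube-system logs coredns-fb8b8dccf-j78d5 --previous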
	
	* 
	* ==> coredns [c0825e30b45e4ae2a6814680a1dc61cbf152c9273234c62cbcc1d00446b1f5b4] <==
	* .:53
	2021-08-13T21:08:03.872Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:08:03.872Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:08:03.872Z [INFO] plugin/reload: Running configuration MD5 = 6c0e799ff6797682aae95e2097dfc0d9
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813205823-30853
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813205823-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813205823-30853
	                    minikube.k8s.io/updated_at=2021_08_13T21_07_15_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:07:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:08:10 +0000   Fri, 13 Aug 2021 21:07:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.49
	  Hostname:    old-k8s-version-20210813205823-30853
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	System Info:
	 Machine ID:                 65adc67ea807433696e3e7757ea3c00d
	 System UUID:                65adc67e-a807-4336-96e3-e7757ea3c00d
	 Boot ID:                    827b3c62-a4f5-4410-bca7-56b86fb51480
	 Kernel Version:             4.19.182
	 OS Image:                   Buildroot 2020.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.20.2
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-j78d5                                         100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     98s
	  kube-system                etcd-old-k8s-version-20210813205823-30853                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                kube-apiserver-old-k8s-version-20210813205823-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                kube-controller-manager-old-k8s-version-20210813205823-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                kube-proxy-xnqfc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                kube-scheduler-old-k8s-version-20210813205823-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                metrics-server-8546d8b77b-mm6vs                                 100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         93s
	  kube-system                storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-2vltn                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-264rf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             370Mi (17%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                              Message
	  ----    ------                   ----                 ----                                              -------
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet, old-k8s-version-20210813205823-30853     Node old-k8s-version-20210813205823-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x7 over 2m6s)  kubelet, old-k8s-version-20210813205823-30853     Node old-k8s-version-20210813205823-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet, old-k8s-version-20210813205823-30853     Node old-k8s-version-20210813205823-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kube-proxy, old-k8s-version-20210813205823-30853  Starting kube-proxy.
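	The node report above corresponds to what "kubectl describe node" prints for this profile; to regenerate it against the live cluster:
	  kubectl --context old-k8s-version-20210813205823-30853 describe node old-k8s-version-20210813205823-30853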
	
	* 
	* ==> dmesg <==
	* [  +3.794724] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.034270] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.085040] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
	[  +0.654997] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.346746] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006456] vboxguest: PCI device not found, probably running on physical hardware.
	[ +16.906913] systemd-fstab-generator[2136]: Ignoring "noauto" for root device
	[  +1.901248] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +0.287647] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +5.784701] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
	[ +14.150103] kauditd_printk_skb: 20 callbacks suppressed
	[Aug13 21:02] kauditd_printk_skb: 104 callbacks suppressed
	[  +6.217917] kauditd_printk_skb: 26 callbacks suppressed
	[Aug13 21:03] NFSD: Unable to end grace period: -110
	[Aug13 21:06] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.312838] kauditd_printk_skb: 44 callbacks suppressed
	[Aug13 21:07] systemd-fstab-generator[5828]: Ignoring "noauto" for root device
	[ +14.125056] tee (6221): /proc/6029/oom_adj is deprecated, please use /proc/6029/oom_score_adj instead.
	[ +16.773994] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.419768] kauditd_printk_skb: 134 callbacks suppressed
	[Aug13 21:08] kauditd_printk_skb: 2 callbacks suppressed
	[Aug13 21:09] systemd-fstab-generator[8008]: Ignoring "noauto" for root device
	[  +0.834273] systemd-fstab-generator[8062]: Ignoring "noauto" for root device
	[  +1.008554] systemd-fstab-generator[8114]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [974c6dadfe1254845bf4a67a940579904bb4e1e5304fdffcba462c009427935e] <==
	* 2021-08-13 21:07:05.866606 I | raft: newRaft f0eab59e12edad64 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2021-08-13 21:07:05.866717 I | raft: f0eab59e12edad64 became follower at term 1
	2021-08-13 21:07:05.876038 W | auth: simple token is not cryptographically signed
	2021-08-13 21:07:05.881029 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-13 21:07:05.882624 I | etcdserver: f0eab59e12edad64 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-13 21:07:05.883767 I | etcdserver/membership: added member f0eab59e12edad64 [https://192.168.83.49:2380] to cluster 42a6ff8259927986
	2021-08-13 21:07:05.884284 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 21:07:05.884485 I | embed: listening for metrics on http://192.168.83.49:2381
	2021-08-13 21:07:05.884967 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 21:07:05.967596 I | raft: f0eab59e12edad64 is starting a new election at term 1
	2021-08-13 21:07:05.967919 I | raft: f0eab59e12edad64 became candidate at term 2
	2021-08-13 21:07:05.968094 I | raft: f0eab59e12edad64 received MsgVoteResp from f0eab59e12edad64 at term 2
	2021-08-13 21:07:05.968113 I | raft: f0eab59e12edad64 became leader at term 2
	2021-08-13 21:07:05.968526 I | raft: raft.node: f0eab59e12edad64 elected leader f0eab59e12edad64 at term 2
	2021-08-13 21:07:05.969917 I | etcdserver: published {Name:old-k8s-version-20210813205823-30853 ClientURLs:[https://192.168.83.49:2379]} to cluster 42a6ff8259927986
	2021-08-13 21:07:05.970382 I | embed: ready to serve client requests
	2021-08-13 21:07:05.970992 I | embed: ready to serve client requests
	2021-08-13 21:07:05.972962 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:07:05.973600 I | etcdserver: setting up the initial cluster version to 3.3
	2021-08-13 21:07:05.975344 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-13 21:07:05.975529 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 21:07:05.976100 I | embed: serving client requests on 192.168.83.49:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 21:07:39.655065 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (133.924751ms) to execute
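	Using the listener and certificate paths from the embed lines above, etcd health can be probed from inside the guest (a sketch; it assumes an etcdctl binary is available there, which the stock guest image may not ship):
	  out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health"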
	
	* 
	* ==> kernel <==
	*  21:09:08 up 8 min,  0 users,  load average: 1.71, 1.07, 0.53
	Linux old-k8s-version-20210813205823-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [8ba6263efe7a5beab424fa6b96a1920abbb8a249cbf8e9d059cbea317bfc31f8] <==
	* I0813 21:08:56.611726       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:57.612225       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:57.612330       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:58.612601       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:58.612780       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:08:59.613098       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:08:59.613454       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:00.613969       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:00.614273       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:01.614611       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:01.614870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:02.615292       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:02.615478       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:03.615978       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:03.616396       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:04.616853       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:04.617203       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:05.617361       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:05.617566       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:06.617983       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:06.618169       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:07.618414       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:07.618958       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:09:08.619044       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:09:08.619216       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [4cfcbd86d99551b85cf6a7b482d72471f3b34c034e804afafe13d24141267770] <==
	* E0813 21:07:34.348237       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.348975       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.364805       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.408369       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.409030       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.409290       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.409507       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.459081       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.459420       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.459604       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.459733       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.484871       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.484999       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.504044       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.504060       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:07:34.512472       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:34.512556       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:07:35.074846       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"7a88a60e-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-mm6vs
	I0813 21:07:35.547378       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"7ad85e48-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-2vltn
	I0813 21:07:35.581263       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"7ae482a9-fc7a-11eb-b132-525400ed6e80", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-264rf
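The FailedCreate warnings above are a startup ordering race: the dashboard ReplicaSets were synced before their serviceaccount existed, and the SuccessfulCreate events just above show the controller recovered on retry once kubernetes-dashboard/kubernetes-dashboard appeared. If the errors had persisted, a reasonable first check (a sketch against this profile's context) would be whether the serviceaccount was ever created:

	kubectl --context old-k8s-version-20210813205823-30853 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard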
	E0813 21:08:00.263328       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:08:02.818481       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:08:30.516077       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:08:34.820909       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:09:00.769085       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
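Both the resource_quota_controller errors and the garbagecollector discovery warnings above point at one cause: the metrics.k8s.io/v1beta1 APIService is registered but unserved, because metrics-server never came up (its pull target is the deliberately unresolvable fake.domain registry, per the kubelet log below). A hedged way to confirm the dead aggregation layer:

	kubectl --context old-k8s-version-20210813205823-30853 get apiservice v1beta1.metrics.k8s.io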
	
	* 
	* ==> kube-proxy [8ee4160af1974107a9b22671318b4dc916936905c07f261d99ba8531015727db] <==
	* W0813 21:07:32.286920       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 21:07:32.377114       1 server_others.go:148] Using iptables Proxier.
	I0813 21:07:32.377573       1 server_others.go:178] Tearing down inactive rules.
	E0813 21:07:32.555467       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0813 21:07:32.807550       1 server.go:555] Version: v1.14.0
	I0813 21:07:32.835831       1 config.go:202] Starting service config controller
	I0813 21:07:32.835959       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 21:07:32.836080       1 config.go:102] Starting endpoints config controller
	I0813 21:07:32.836097       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 21:07:32.940892       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0813 21:07:32.941257       1 controller_utils.go:1034] Caches are synced for service config controller
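kube-proxy was started without an explicit proxy-mode and fell back to the iptables proxier; the ipvs teardown error during "Tearing down inactive rules" is noise from cleaning up a proxier that was never active. One hedged way to confirm the iptables proxier is in effect is to look for its nat chains on the node (the KUBE-SERVICES chain name is assumed from the iptables proxier's conventions, not taken from this run):

	out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo iptables -t nat -L KUBE-SERVICES | head"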
	
	* 
	* ==> kube-scheduler [02c918cf1c5c4e4629ed3516550721b15f737e687fc7cf6dbc68cebb334bf5d3] <==
	* W0813 21:07:05.469817       1 authentication.go:55] Authentication is disabled
	I0813 21:07:05.469890       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 21:07:05.470320       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 21:07:10.098259       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:07:10.109142       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:07:10.111429       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:07:10.111795       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:07:10.113885       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:07:10.114379       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:07:10.116354       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:07:10.116605       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:07:10.117913       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:07:10.122989       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:07:11.100451       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:07:11.114270       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:07:11.120553       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:07:11.125479       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:07:11.127101       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:07:11.131833       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:07:11.133225       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:07:11.135747       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:07:11.136864       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:07:11.138204       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0813 21:07:12.975783       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 21:07:13.076161       1 controller_utils.go:1034] Caches are synced for scheduler controller
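The forbidden errors above are the usual kube-scheduler startup window: its informers begin listing cluster resources before the apiserver has reconciled the RBAC bootstrap roles, and they stop once the caches sync at 21:07:12. A hedged spot check that the permissions settled afterwards:

	kubectl --context old-k8s-version-20210813205823-30853 auth can-i list pods --as=system:kube-scheduler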
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:01:11 UTC, end at Fri 2021-08-13 21:09:09 UTC. --
	Aug 13 21:07:50 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:07:50.729894    5849 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:07:50 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:07:50.729930    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:07:51 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:07:51.030084    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:02 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:02.718025    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:03 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:03.286751    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:04 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:04.497560    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:11 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:11.030497    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:13 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:13.328711    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.745400    5849 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.745747    5849 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.746000    5849 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:08:16 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:16.746279    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:08:23 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:23.369212    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:25 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:25.678468    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:29 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:29.717615    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:31 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:31.030197    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:42 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:42.717821    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:43 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:43.482580    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:44 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:44.713849    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:08:53 old-k8s-version-20210813205823-30853 kubelet[5849]: W0813 21:08:53.537229    5849 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 21:08:53 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:53.715860    5849 pod_workers.go:190] Error syncing pod 7b5aec25-fc7a-11eb-b132-525400ed6e80 ("metrics-server-8546d8b77b-mm6vs_kube-system(7b5aec25-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:08:57 old-k8s-version-20210813205823-30853 kubelet[5849]: E0813 21:08:57.714068    5849 pod_workers.go:190] Error syncing pod 7ba32335-fc7a-11eb-b132-525400ed6e80 ("dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2vltn_kubernetes-dashboard(7ba32335-fc7a-11eb-b132-525400ed6e80)"
	Aug 13 21:09:03 old-k8s-version-20210813205823-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:09:03 old-k8s-version-20210813205823-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:09:03 old-k8s-version-20210813205823-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
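The ErrImagePull/ImagePullBackOff loop above is induced by the test itself: the metrics-server registry is overridden to fake.domain (the same --registries=MetricsServer=fake.domain override visible for the sibling profiles in the Audit table further down), so fake.domain/k8s.gcr.io/echoserver:1.4 can never resolve. The DNS failure reproduces by hand from inside the node; a sketch:

	out/minikube-linux-amd64 -p old-k8s-version-20210813205823-30853 ssh "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"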
	
	* 
	* ==> kubernetes-dashboard [5b4555812b0f657c3f6847fef38a8be232441a35f9436270264bb24d832a57e6] <==
	* 2021/08/13 21:07:36 Using namespace: kubernetes-dashboard
	2021/08/13 21:07:36 Using in-cluster config to connect to apiserver
	2021/08/13 21:07:36 Using secret token for csrf signing
	2021/08/13 21:07:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:07:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:07:36 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 21:07:36 Generating JWE encryption key
	2021/08/13 21:07:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:07:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:07:36 Initializing JWE encryption key from synchronized object
	2021/08/13 21:07:36 Creating in-cluster Sidecar client
	2021/08/13 21:07:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:07:36 Serving insecurely on HTTP port: 9090
	2021/08/13 21:08:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:08:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:09:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:07:36 Starting overwatch
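The repeating "Metric client health check failed" lines are the dashboard probing its dashboard-metrics-scraper service every 30 seconds; the "server is currently unable to handle the request" response is consistent with the scraper pod sitting in CrashLoopBackOff in the kubelet log above. A hedged check on the service and its endpoints:

	kubectl --context old-k8s-version-20210813205823-30853 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper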
	
	* 
	* ==> storage-provisioner [0fdc2c1dd8463730f24da01e7b0766e9aa23e134eb287d3e02cdabf0519a4fe6] <==
	* I0813 21:07:35.312584       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:07:35.346422       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:07:35.347336       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:07:35.368323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:07:35.369471       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205823-30853_51d1d3a6-95e9-48b0-92aa-548fed77c2e1!
	I0813 21:07:35.380231       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7a516431-fc7a-11eb-b132-525400ed6e80", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813205823-30853_51d1d3a6-95e9-48b0-92aa-548fed77c2e1 became leader
	I0813 21:07:35.471401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205823-30853_51d1d3a6-95e9-48b0-92aa-548fed77c2e1!
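storage-provisioner runs client-go leader election over the kube-system/k8s.io-minikube-hostpath Endpoints object before starting its controller, which is what the acquire/became-leader lines record. The current holder is typically stored in the control-plane.alpha.kubernetes.io/leader annotation; a hedged way to inspect it:

	kubectl --context old-k8s-version-20210813205823-30853 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml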
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853: exit status 2 (470.136807ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-mm6vs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 describe pod metrics-server-8546d8b77b-mm6vs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813205823-30853 describe pod metrics-server-8546d8b77b-mm6vs: exit status 1 (72.086713ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-mm6vs" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813205823-30853 describe pod metrics-server-8546d8b77b-mm6vs: exit status 1
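The NotFound above is most plausibly a teardown race rather than a second failure: metrics-server-8546d8b77b-mm6vs was listed as non-running at helpers_test.go:271, then deleted (or replaced by its ReplicaSet) before the follow-up describe call, so the two kubectl round trips saw different cluster states.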
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.20s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813210102-30853 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813210102-30853 --alsologtostderr -v=1: exit status 80 (1.76583674s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-different-port-20210813210102-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:10:36.548943   13310 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:10:36.549076   13310 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:10:36.549089   13310 out.go:311] Setting ErrFile to fd 2...
	I0813 21:10:36.549094   13310 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:10:36.549239   13310 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:10:36.549456   13310 out.go:305] Setting JSON to false
	I0813 21:10:36.549486   13310 mustload.go:65] Loading cluster: default-k8s-different-port-20210813210102-30853
	I0813 21:10:36.549945   13310 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:10:36.550503   13310 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:36.550563   13310 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:36.562364   13310 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0813 21:10:36.562834   13310 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:36.563427   13310 main.go:130] libmachine: Using API Version  1
	I0813 21:10:36.563455   13310 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:36.563806   13310 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:36.563985   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:36.567553   13310 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:36.567866   13310 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:36.567907   13310 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:36.579692   13310 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36285
	I0813 21:10:36.580142   13310 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:36.580624   13310 main.go:130] libmachine: Using API Version  1
	I0813 21:10:36.580649   13310 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:36.581078   13310 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:36.581269   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:36.582031   13310 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210813210102-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:10:36.584607   13310 out.go:177] * Pausing node default-k8s-different-port-20210813210102-30853 ... 
	I0813 21:10:36.584632   13310 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:36.584935   13310 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:36.584979   13310 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:36.596822   13310 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42261
	I0813 21:10:36.597259   13310 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:36.597778   13310 main.go:130] libmachine: Using API Version  1
	I0813 21:10:36.597801   13310 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:36.598244   13310 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:36.598449   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:36.598662   13310 ssh_runner.go:149] Run: systemctl --version
	I0813 21:10:36.598706   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:36.604720   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:36.605130   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:36.605181   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:36.605258   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:36.605435   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:36.605570   13310 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:36.605700   13310 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:36.708472   13310 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:36.720464   13310 pause.go:50] kubelet running: true
	I0813 21:10:36.720535   13310 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:10:37.033589   13310 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:10:37.033688   13310 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:10:37.198905   13310 cri.go:76] found id: "da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094"
	I0813 21:10:37.198941   13310 cri.go:76] found id: "99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835"
	I0813 21:10:37.198948   13310 cri.go:76] found id: "3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b"
	I0813 21:10:37.198954   13310 cri.go:76] found id: "cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db"
	I0813 21:10:37.198959   13310 cri.go:76] found id: "426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e"
	I0813 21:10:37.198965   13310 cri.go:76] found id: "9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810"
	I0813 21:10:37.198970   13310 cri.go:76] found id: "e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da"
	I0813 21:10:37.198975   13310 cri.go:76] found id: "78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	I0813 21:10:37.198981   13310 cri.go:76] found id: "b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b"
	I0813 21:10:37.198990   13310 cri.go:76] found id: ""
	I0813 21:10:37.199046   13310 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
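The stderr trace ends at the "sudo runc list -f json" step, so exit status 80 most plausibly surfaced while minikube was enumerating running containers to pause, after kubelet had already been disabled. A hedged manual equivalent of the container listing the pause path performs (command mirrored from the cri.go line above):

	out/minikube-linux-amd64 -p default-k8s-different-port-20210813210102-30853 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"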
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813210102-30853 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853: exit status 2 (285.31742ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813210102-30853 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20210813210102-30853 logs -n 25: (1.376556176s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:17 UTC | Fri, 13 Aug 2021 21:01:05 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:18 UTC | Fri, 13 Aug 2021 21:01:19 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:19 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 21:02:15 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:27 UTC | Fri, 13 Aug 2021 21:02:28 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:03:15 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:07 UTC | Fri, 13 Aug 2021 21:09:09 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:09 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:11 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:10:25 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:36 UTC | Fri, 13 Aug 2021 21:10:36 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:10:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
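	The "Last Start" log that follows corresponds to the final start row in the table above. Reconstructed as a single invocation, with the binary path (MINIKUBE_BIN) and profile name exactly as recorded in this run:
	
		out/minikube-linux-amd64 start -p newest-cni-20210813210910-30853 --memory=2200 \
			--alsologtostderr --wait=apiserver,system_pods,default_sa \
			--feature-gates ServerSideApply=true --network-plugin=cni \
			--extra-config=kubelet.network-plugin=cni \
			--extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
			--driver=kvm2 --container-runtime=crio \
			--kubernetes-version=v1.22.0-rc.0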
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:09:10
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:09:10.673379   12791 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:09:10.673452   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673457   12791 out.go:311] Setting ErrFile to fd 2...
	I0813 21:09:10.673460   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673589   12791 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:09:10.673842   12791 out.go:305] Setting JSON to false
	I0813 21:09:10.710967   12791 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":10313,"bootTime":1628878638,"procs":196,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:09:10.711108   12791 start.go:121] virtualization: kvm guest
	I0813 21:09:10.714392   12791 out.go:177] * [newest-cni-20210813210910-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:09:10.716013   12791 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:09:10.714549   12791 notify.go:169] Checking for updates...
	I0813 21:09:10.717634   12791 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:09:10.719077   12791 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:10.720797   12791 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:09:10.721401   12791 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:09:10.721555   12791 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:10.721780   12791 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 21:09:10.721849   12791 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:09:10.756752   12791 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:09:10.756780   12791 start.go:278] selected driver: kvm2
	I0813 21:09:10.756787   12791 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:09:10.756803   12791 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:09:10.758053   12791 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.758234   12791 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:09:10.769742   12791 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:09:10.769793   12791 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 21:09:10.769818   12791 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 21:09:10.769965   12791 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:09:10.769992   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:10.769999   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:10.770006   12791 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:09:10.770016   12791 start_flags.go:277] config:
	{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
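	This config struct is what gets serialized to the profile's config.json (see the "Saving config to ..." line a few entries below). One quick way to pretty-print it after a run, assuming the same MINIKUBE_HOME layout as this job:
	
		python3 -m json.tool /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json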
	I0813 21:09:10.770113   12791 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.772194   12791 out.go:177] * Starting control plane node newest-cni-20210813210910-30853 in cluster newest-cni-20210813210910-30853
	I0813 21:09:10.772225   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:10.772278   12791 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 21:09:10.772313   12791 cache.go:56] Caching tarball of preloaded images
	I0813 21:09:10.772443   12791 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 21:09:10.772466   12791 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 21:09:10.772616   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:10.772647   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json: {Name:mka76415e48e0242b5a1559d0d7199fac2bfb5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:10.772840   12791 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:09:10.772878   12791 start.go:313] acquiring machines lock for newest-cni-20210813210910-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:09:10.772950   12791 start.go:317] acquired machines lock for "newest-cni-20210813210910-30853" in 46.661µs
	I0813 21:09:10.772977   12791 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:09:10.773061   12791 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:09:07.914518   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.406958   11447 pod_ready.go:81] duration metric: took 4m0.40016385s waiting for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:08.406984   11447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:08.407011   11447 pod_ready.go:38] duration metric: took 4m38.843620331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:08.407047   11447 kubeadm.go:604] restartCluster took 5m2.813329014s
	W0813 21:09:08.407209   11447 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:08.407246   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
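	The reset above is run over SSH inside the guest of the other (no-preload) profile after its restartCluster timed out; standalone, the exact command as logged (versioned kubeadm binary and CRI socket included) is:
	
		sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force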
	I0813 21:09:07.902231   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.401905   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.775162   12791 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 21:09:10.775296   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:10.775358   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:10.786479   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0813 21:09:10.786930   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:10.787562   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:09:10.787587   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:10.788015   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:10.788228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:10.788398   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:10.788591   12791 start.go:160] libmachine.API.Create for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:10.788640   12791 client.go:168] LocalClient.Create starting
	I0813 21:09:10.788684   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:09:10.788746   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788770   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.788912   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:09:10.788937   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788956   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.789012   12791 main.go:130] libmachine: Running pre-create checks...
	I0813 21:09:10.789029   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .PreCreateCheck
	I0813 21:09:10.789351   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:10.789790   12791 main.go:130] libmachine: Creating machine...
	I0813 21:09:10.789804   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Create
	I0813 21:09:10.789932   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating KVM machine...
	I0813 21:09:10.792752   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found existing default KVM network
	I0813 21:09:10.794412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794251   12815 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc000010800] misses:0}
	I0813 21:09:10.794453   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794342   12815 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 21:09:10.817502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | trying to create private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24...
	I0813 21:09:11.103452   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24 created
	I0813 21:09:11.103485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.103368   12815 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.103509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.103562   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:09:11.103608   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:09:11.320966   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.320858   12815 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa...
	I0813 21:09:11.459093   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.458976   12815 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/newest-cni-20210813210910-30853.rawdisk...
	I0813 21:09:11.459148   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing magic tar header
	I0813 21:09:11.459177   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing SSH key tar header
	I0813 21:09:11.459194   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.459075   12815 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.459223   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 (perms=drwx------)
	I0813 21:09:11.459288   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853
	I0813 21:09:11.459321   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:09:11.459350   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:09:11.459373   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:09:11.459391   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:09:11.459409   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.459426   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:09:11.459444   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:09:11.459464   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:09:11.459485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:09:11.459500   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home
	I0813 21:09:11.459515   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:09:11.459528   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Skipping /home - not owner
	I0813 21:09:11.459546   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.488427   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:ee:fb:7e in network default
	I0813 21:09:11.489099   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring networks are active...
	I0813 21:09:11.489140   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.491476   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network default is active
	I0813 21:09:11.491829   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network mk-newest-cni-20210813210910-30853 is active
	I0813 21:09:11.492457   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Getting domain xml...
	I0813 21:09:11.494775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.955786   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting to get IP...
	I0813 21:09:11.956670   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957315   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.957262   12815 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:09:12.221730   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222307   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222349   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.222212   12815 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:09:12.604662   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605164   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605191   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.605108   12815 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:09:13.029701   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030156   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030218   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.030122   12815 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:09:13.504659   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505143   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.505105   12815 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:09:14.093824   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094446   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.094345   12815 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:09:14.929917   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.930469   12815 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:09:12.902877   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.903637   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.678952   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679492   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679571   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:15.679462   12815 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:09:16.668007   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668572   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668609   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:16.668495   12815 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:09:17.859819   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860363   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860390   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:17.860285   12815 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:09:19.539855   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:19.540442   12815 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:09:17.403580   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.901370   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.902145   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.887601   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:21.888074   12815 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
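	The retry delays while waiting for the guest's DHCP lease grow from ~263ms to ~3.4s (with some jitter). To pull the full backoff sequence out of a saved log like this one (<logfile> is a placeholder for the log path):
	
		grep -oE 'will retry after [0-9.]+(ms|s)' <logfile>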
	I0813 21:09:25.255905   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256490   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Found IP for machine: 192.168.39.210
	I0813 21:09:25.256524   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has current primary IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserving static IP address...
	I0813 21:09:25.256915   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"} in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.303282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserved static IP address: 192.168.39.210
	I0813 21:09:25.303341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Getting to WaitForSSH function...
	I0813 21:09:25.303352   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting for SSH to be available...
	I0813 21:09:25.309055   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309442   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.309474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309627   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH client type: external
	I0813 21:09:25.309651   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa (-rw-------)
	I0813 21:09:25.309698   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:09:25.309731   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | About to run SSH command:
	I0813 21:09:25.309744   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | exit 0
	I0813 21:09:25.467104   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | SSH cmd err, output: <nil>: 
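	Expanded, the external SSH probe logged above is equivalent to the following invocation (guest IP, key path, and options exactly as recorded; only the argument order differs):
	
		/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
			-o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
			-o PasswordAuthentication=no -o ServerAliveInterval=60 \
			-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
			-o IdentitiesOnly=yes \
			-i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa \
			-p 22 docker@192.168.39.210 "exit 0"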
	I0813 21:09:25.467603   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) KVM machine creation complete!
	I0813 21:09:25.467679   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:25.468310   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468513   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468691   12791 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:09:25.468710   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:09:25.471536   12791 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:09:25.471555   12791 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:09:25.471565   12791 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:09:25.471575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.476123   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476450   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.476479   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476604   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.476755   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.476933   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.477105   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.477284   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.477466   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.477480   12791 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:09:25.594161   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.594190   12791 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:09:25.594203   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.600130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600531   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.600564   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600765   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.600974   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601303   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.601456   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.601620   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.601635   12791 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:09:22.392237   11600 pod_ready.go:81] duration metric: took 4m0.007094721s waiting for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:22.392261   11600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:22.392283   11600 pod_ready.go:38] duration metric: took 4m14.135839126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:22.392312   11600 kubeadm.go:604] restartCluster took 4m52.280117973s
	W0813 21:09:22.392448   11600 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:22.392485   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:25.715874   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:09:25.715991   12791 main.go:130] libmachine: found compatible host: buildroot
	I0813 21:09:25.716007   12791 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:09:25.716023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716285   12791 buildroot.go:166] provisioning hostname "newest-cni-20210813210910-30853"
	I0813 21:09:25.716311   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716475   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.722141   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.722575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722814   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.723002   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723169   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723323   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.723458   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.723611   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.723626   12791 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname
	I0813 21:09:25.855120   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813210910-30853
	
	I0813 21:09:25.855151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.861182   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861544   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.861567   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861715   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.861922   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862214   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.862344   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.862548   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.862577   12791 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813210910-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813210910-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813210910-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:09:25.982023   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.982082   12791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:09:25.982118   12791 buildroot.go:174] setting up certificates
	I0813 21:09:25.982134   12791 provision.go:83] configureAuth start
	I0813 21:09:25.982150   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.982399   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:25.988009   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.988380   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.993579   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.993994   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.994024   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.994249   12791 provision.go:138] copyHostCerts
	I0813 21:09:25.994336   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:09:25.994347   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:09:25.994396   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:09:25.994483   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:09:25.994497   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:09:25.994532   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:09:25.994643   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:09:25.994656   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:09:25.994688   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:09:25.994760   12791 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813210910-30853 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube newest-cni-20210813210910-30853]
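	To spot-check the SANs baked into the freshly generated server certificate, a stock-openssl check (not part of the test run itself) would be:
	
		openssl x509 -in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'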
	I0813 21:09:26.305745   12791 provision.go:172] copyRemoteCerts
	I0813 21:09:26.305810   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:09:26.305840   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.311502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.311880   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.311916   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.312018   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.312266   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.312474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.312635   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:26.397917   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:09:26.415261   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:09:26.432018   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:09:26.448392   12791 provision.go:86] duration metric: configureAuth took 466.244488ms
	I0813 21:09:26.448413   12791 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:09:26.448550   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:26.448647   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.453886   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.454267   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454404   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.454578   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454719   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454882   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.455020   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:26.455171   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:26.455193   12791 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:09:27.218253   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
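The step above configures CRI-O's insecure-registry range by writing a systemd environment file over SSH and restarting the service. A minimal, self-contained Go sketch of the same remote-command pattern follows; this is not minikube's actual sshutil code, and the address, user, and key path are placeholders copied from this log.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path, user, and address taken from the log above.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.39.210:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The same command the log shows: drop an env file for CRI-O, then restart it.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Println(string(out))
	if err != nil {
		log.Fatal(err)
	}
}
```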
	I0813 21:09:27.218291   12791 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:09:27.218304   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetURL
	I0813 21:09:27.220942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using libvirt version 3000000
	I0813 21:09:27.225565   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.225908   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.225955   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.226230   12791 main.go:130] libmachine: Docker is up and running!
	I0813 21:09:27.226255   12791 main.go:130] libmachine: Reticulating splines...
	I0813 21:09:27.226262   12791 client.go:171] LocalClient.Create took 16.437611332s
	I0813 21:09:27.226308   12791 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813210910-30853" took 16.437720973s
	I0813 21:09:27.226319   12791 start.go:267] post-start starting for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:27.226323   12791 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:09:27.226339   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.226579   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:09:27.226605   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.231167   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231514   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.231541   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231723   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.231888   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.232115   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.232258   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.318810   12791 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:09:27.324679   12791 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:09:27.324708   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:09:27.324766   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:09:27.324867   12791 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:09:27.324993   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:09:27.332665   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:27.349495   12791 start.go:270] post-start completed in 123.164223ms
	I0813 21:09:27.349583   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:27.350235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.356173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.356569   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356804   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:27.357034   12791 start.go:129] duration metric: createHost completed in 16.583958717s
	I0813 21:09:27.357054   12791 start.go:80] releasing machines lock for "newest-cni-20210813210910-30853", held for 16.584089955s
	I0813 21:09:27.357097   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.357282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.361779   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.362122   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362275   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362445   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362924   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.363133   12791 ssh_runner.go:149] Run: systemctl --version
	I0813 21:09:27.363160   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.363219   12791 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:09:27.363264   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.368253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368519   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.368556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.368784   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.368919   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.369055   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.369149   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369521   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.369556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369717   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.369863   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.369979   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.370099   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.452425   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:27.452543   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:31.448706   12791 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.996135455s)
	I0813 21:09:31.448838   12791 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:09:31.448901   12791 ssh_runner.go:149] Run: which lz4
	I0813 21:09:31.453326   12791 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:09:31.458022   12791 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
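The `stat` probe above is how the runner decides whether the preload tarball needs copying: exit 0 means the file is already on the VM, a non-zero exit (as here) means it must be transferred with scp. A hedged sketch of that check using `os/exec` and the system `ssh` binary — a simplification, not minikube's real ssh_runner; host and path are copied from the log:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// remoteFileExists probes a remote path with stat over ssh. A non-zero exit
// ("No such file or directory") is treated as "absent"; any other failure
// (network, auth) is surfaced to the caller as an error.
func remoteFileExists(host, path string) (bool, error) {
	cmd := exec.Command("ssh", host, fmt.Sprintf(`stat -c "%%s %%y" %s`, path))
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return false, nil // stat exited non-zero: file not there, fall back to scp
		}
		return false, err
	}
	return true, nil
}

func main() {
	ok, err := remoteFileExists("docker@192.168.39.210", "/preloaded.tar.lz4")
	fmt.Println(ok, err)
}
```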
	I0813 21:09:31.458058   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 21:09:34.040840   12791 crio.go:362] Took 2.587545 seconds to copy over tarball
	I0813 21:09:34.040960   12791 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:09:39.662568   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.255292287s)
	I0813 21:09:39.662654   11447 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:09:39.679831   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:09:39.679928   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:39.725756   11447 cri.go:76] found id: ""
	I0813 21:09:39.725838   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:39.734367   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:39.743419   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:39.743465   11447 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:39.046178   12791 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.005181631s)
	I0813 21:09:39.046212   12791 crio.go:369] Took 5.005343 seconds to extract the tarball
	I0813 21:09:39.046225   12791 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:09:39.096327   12791 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:09:39.108664   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:09:39.120896   12791 docker.go:153] disabling docker service ...
	I0813 21:09:39.120956   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:09:39.132781   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:09:39.144772   12791 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:09:39.291366   12791 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:09:39.473805   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:09:39.488990   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:09:39.508851   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:09:39.519787   12791 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:09:39.527766   12791 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:09:39.527827   12791 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:09:39.549292   12791 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:09:39.557653   12791 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:09:39.695889   12791 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:09:39.852538   12791 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:09:39.852673   12791 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:09:39.865143   12791 start.go:413] Will wait 60s for crictl version
	I0813 21:09:39.865219   12791 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:09:39.902891   12791 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:09:39.902976   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:40.146285   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:44.881949   11447 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:44.881970   12791 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:09:44.882025   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:44.888023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888330   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:44.888361   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888544   12791 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:09:44.893252   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
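The `{ grep -v … ; echo … ; } > /tmp/h.$$; sudo cp` pipeline above is an idempotent way to pin a hosts entry: strip any stale line for the name, append the current mapping, then copy the temp file back as root. The same idea in plain Go, as a sketch only (paths and the mapping are taken from the log; minikube does this via the shell, not this code):

```go
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this name so repeated runs stay idempotent.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Writing /etc/hosts needs root; the log stages the file in /tmp and `sudo cp`s it.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
```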
	I0813 21:09:44.903812   12791 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.crt
	I0813 21:09:44.903997   12791 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:44.922443   12791 out.go:177]   - kubelet.network-plugin=cni
	I0813 21:09:44.923908   12791 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:09:44.923979   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:44.924054   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.004762   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.004791   12791 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:09:45.004856   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.042121   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.042150   12791 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:09:45.042226   12791 ssh_runner.go:149] Run: crio config
	I0813 21:09:45.253009   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:45.253045   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:45.253059   12791 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:09:45.253078   12791 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813210910-30853 NodeName:newest-cni-20210813210910-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.210 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:09:45.253242   12791 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813210910-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
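
The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:153. A toy illustration of that render step with `text/template` — the struct fields and template text here are invented for the sketch and deliberately much smaller than minikube's real generator:

```go
package main

import (
	"os"
	"text/template"
)

// Opts carries only the fields this sketch needs; the real generator
// holds many more (see the kubeadm.go:153 options line above).
type Opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, Opts{
		AdvertiseAddress: "192.168.39.210",
		BindPort:         8443,
		NodeName:         "newest-cni-20210813210910-30853",
		CRISocket:        "/var/run/crio/crio.sock",
	})
}
```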
	
	I0813 21:09:45.253382   12791 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813210910-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.210 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:09:45.253451   12791 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:09:45.260928   12791 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:09:45.260983   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:09:45.268144   12791 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (554 bytes)
	I0813 21:09:45.280833   12791 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:09:45.293352   12791 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0813 21:09:45.306281   12791 ssh_runner.go:149] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0813 21:09:45.310235   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:09:45.322126   12791 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853 for IP: 192.168.39.210
	I0813 21:09:45.322191   12791 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:09:45.322212   12791 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:09:45.322281   12791 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:45.322307   12791 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a
	I0813 21:09:45.322319   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a with IP's: [192.168.39.210 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:09:45.521630   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a ...
	I0813 21:09:45.521662   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a: {Name:mk4aa4db18dba264c364eea6455fafca6541c687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521857   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a ...
	I0813 21:09:45.521869   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a: {Name:mk4bafabda5b550064b81d0be7e6d613e7cbe853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521953   12791 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt
	I0813 21:09:45.522012   12791 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key
	I0813 21:09:45.522063   12791 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key
	I0813 21:09:45.522071   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt with IP's: []
	I0813 21:09:45.572044   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt ...
	I0813 21:09:45.572072   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt: {Name:mk46480092ca0ddfdbb22ced231c8543e6fadff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.572258   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key ...
	I0813 21:09:45.572270   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key: {Name:mk2ff838c1ce904cf05995003085f2c953d17b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
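The certs.go/crypto.go steps above issue an apiserver certificate whose SANs cover the node IP (192.168.39.210), the service VIP (10.96.0.1), and loopback. A compact standard-library sketch of minting a cert with those IP SANs — self-signed here for brevity, whereas the real flow signs with the minikubeCA key:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The same four IP SANs the log records for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.39.210"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	// Self-signed (template doubles as parent) purely to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```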
	I0813 21:09:45.572443   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:09:45.572486   12791 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:09:45.572497   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:09:45.572520   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:09:45.572550   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:09:45.572575   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:09:45.572620   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:45.573530   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:09:45.591406   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:09:45.607675   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:09:45.623382   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:09:44.885025   11447 out.go:204]   - Booting up control plane ...
	I0813 21:09:45.638600   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:09:45.655496   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:09:45.672748   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:09:45.690934   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:09:45.709394   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:09:45.727886   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:09:45.747118   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:09:45.764623   12791 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:09:45.776487   12791 ssh_runner.go:149] Run: openssl version
	I0813 21:09:45.782506   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:09:45.790602   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795798   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795845   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.801633   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:09:45.809459   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:09:45.817086   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821525   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821581   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.827427   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:09:45.835137   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:09:45.843222   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848030   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848070   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.854871   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
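The `openssl x509 -hash -noout` / `ln -fs …/<hash>.0` sequence above maintains an OpenSSL-style CA directory: each trusted cert is reachable through a symlink named after its subject hash, which is how TLS clients on the VM locate it. A sketch that shells out to the same openssl binary (assumes `openssl` is on PATH and the process may write to the target directory; not minikube's code):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPEM into dir as "<subject-hash>.0",
// mirroring the openssl x509 -hash + ln -fs commands in the log.
func linkBySubjectHash(dir, certPEM string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPEM).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: replace a stale link if one exists
	return os.Symlink(certPEM, link)
}

func main() {
	if err := linkBySubjectHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
```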
	I0813 21:09:45.863382   12791 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:45.863483   12791 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:09:45.863550   12791 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:45.897179   12791 cri.go:76] found id: ""
	I0813 21:09:45.897265   12791 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:09:45.904791   12791 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:45.911599   12791 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:45.918334   12791 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:45.918383   12791 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:57.982116   11447 out.go:204]   - Configuring RBAC rules ...
	I0813 21:09:58.584325   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:09:58.584349   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:00.460094   12791 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:58.586084   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:09:58.586145   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:09:58.603522   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:09:58.627002   11447 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:09:58.627101   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:58.627103   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813210102-30853 minikube.k8s.io/updated_at=2021_08_13T21_09_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.050930   11447 ops.go:34] apiserver oom_adj: -16
	I0813 21:09:59.051059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.695711   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.195937   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.695450   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.003248   12791 out.go:204]   - Booting up control plane ...
	I0813 21:10:01.195565   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:01.695971   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.195512   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.696069   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.195960   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.696007   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.195636   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.695628   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.195701   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.695999   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.044352   11600 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (46.651842681s)
	I0813 21:10:09.044429   11600 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:09.059478   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:09.059553   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:09.093284   11600 cri.go:76] found id: ""
	I0813 21:10:09.093381   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:09.100568   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:09.107226   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:09.107269   11600 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:10:06.195800   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:06.695240   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.195746   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.695213   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.195912   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.695965   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.195595   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.696049   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.195131   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.695293   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.730908   11600 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:11.196059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:11.534135   11447 kubeadm.go:985] duration metric: took 12.907094032s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:11.534170   11447 kubeadm.go:392] StartCluster complete in 6m5.98958255s
	I0813 21:10:11.534191   11447 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:11.534316   11447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:11.535601   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:12.110091   11447 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813210102-30853" rescaled to 1
	I0813 21:10:12.110179   11447 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:10:12.112084   11447 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:12.110253   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:12.112158   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:12.110569   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:10:12.110623   11447 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:10:12.112334   11447 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112337   11447 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112351   11447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112358   11447 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112366   11447 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:12.112400   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112736   11447 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112752   11447 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112760   11447 addons.go:147] addon metrics-server should already be in state true
	I0813 21:10:12.112763   11447 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112774   11447 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112783   11447 addons.go:147] addon dashboard should already be in state true
	I0813 21:10:12.112784   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112802   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112857   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.112894   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.112750   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113192   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113201   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113224   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113233   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113340   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.140644   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0813 21:10:12.140642   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0813 21:10:12.140661   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0813 21:10:12.141348   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141465   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141541   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141935   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.141953   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142074   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142081   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142089   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142093   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142438   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.142486   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143136   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143176   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.143388   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143929   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143972   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.144251   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.144301   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0813 21:10:12.144729   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.145337   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.145357   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.145698   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.146348   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.146380   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161135   11447 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.161159   11447 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:12.161188   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.161594   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.161636   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161853   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0813 21:10:12.161878   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0813 21:10:12.162218   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162412   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162720   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162740   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.162900   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162921   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.163146   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.163294   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.166669   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.169181   11447 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.169252   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:10:12.169267   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:10:12.167214   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.169288   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.169571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.173910   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.175978   11447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:12.176070   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0813 21:10:12.176093   11447 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.176103   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:12.176120   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.175639   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.176186   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.176216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.175916   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0813 21:10:12.176232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.176420   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.176469   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.176549   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.176672   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.176869   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.177027   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177041   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177293   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177308   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177366   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.177663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.177782   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.178349   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.178391   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.181885   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.183919   11447 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:10:12.182804   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183976   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.184012   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183416   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:10.812498   11600 out.go:204]   - Booting up control plane ...
	I0813 21:10:12.186349   11447 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.186413   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:10:12.184193   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.186427   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:10:12.186446   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.186621   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.186808   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.190702   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0813 21:10:12.191063   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.191556   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.191584   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.191977   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.192165   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.192357   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192757   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.192786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192929   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.193084   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.193242   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.193363   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.195129   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.195341   11447 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.195358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:12.195378   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.200908   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201282   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.201309   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201443   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.201571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.201711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.201825   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.425248   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.468978   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:10:12.469021   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:10:12.494701   11447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.495206   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
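The one-liner above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 on this network). Unrolled, the same technique with a stock kubectl looks roughly like this (kubeconfig and binary paths shortened for readability):

kubectl -n kube-system get configmap coredns -o yaml \
  | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' \
  | kubectl replace -f -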
	I0813 21:10:12.499329   11447 node_ready.go:49] node "default-k8s-different-port-20210813210102-30853" has status "Ready":"True"
	I0813 21:10:12.499359   11447 node_ready.go:38] duration metric: took 4.621451ms waiting for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.499373   11447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:12.499757   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.510602   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
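pod_ready polls the pod's Ready condition through the API until it flips to True or the timeout expires. A one-shot equivalent with stock kubectl (label taken from the coredns pods in this run) would be roughly:

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m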
	I0813 21:10:12.610525   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:10:12.610562   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:10:12.656245   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:10:12.656276   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:10:12.772157   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:10:12.772191   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:10:12.815178   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:12.815208   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:10:12.932243   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:10:12.932272   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:10:12.992201   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:13.151328   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:10:13.151358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:10:13.272742   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:10:13.272771   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:10:13.504799   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:10:13.504829   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:10:13.711447   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:10:13.711476   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:10:13.833690   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:10:13.833722   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:10:13.907807   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:13.907839   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:10:14.189833   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:14.535190   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.411080   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.985781369s)
	I0813 21:10:15.411145   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411139   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.91134851s)
	I0813 21:10:15.411163   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411180   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411211   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411243   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.916004514s)
	I0813 21:10:15.411301   11447 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
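A quick way to confirm the injected record from inside the cluster is a throwaway DNS probe pod (illustrative; any image that ships a working nslookup will do):

kubectl run dnsprobe --rm -it --restart=Never --image=busybox:1.28 -- nslookup host.minikube.internal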
	I0813 21:10:15.412648   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412658   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412711   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412721   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412731   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412738   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412765   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412779   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412797   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.412740   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413131   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413156   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413170   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413203   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413207   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413222   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413245   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.413261   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413535   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413550   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138255   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.145991542s)
	I0813 21:10:16.138325   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138339   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138639   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.138660   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.138663   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138692   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138702   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138996   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.139040   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.139056   11447 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:16.138998   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.609336   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:17.060932   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.871038717s)
	I0813 21:10:17.061005   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061023   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061327   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061348   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.061358   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061349   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061370   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061708   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061715   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061777   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.064437   11447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:10:17.064471   11447 addons.go:344] enableAddons completed in 4.953854482s
	I0813 21:10:19.033855   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:21.685414   12791 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:22.697730   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:10:22.697758   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:22.699669   12791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:22.699748   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:22.711081   12791 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
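The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. A representative bridge conflist of this kind (illustrative, not byte-exact; 10.244.0.0/16 is the conventional pod subnet, not confirmed by this log) is:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF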
	I0813 21:10:22.740715   12791 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:22.740845   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:22.740928   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813210910-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_22_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.063141   12791 ops.go:34] apiserver oom_adj: -16
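The repeated "kubectl get sa default" calls that follow are a poll loop: the controller manager creates the default ServiceAccount asynchronously, and workload creation is gated on it. Separately, the minikube-rbac ClusterRoleBinding created above grants cluster-admin to the kube-system:default ServiceAccount; one way to verify that grant afterwards (illustrative) is:

kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default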
	I0813 21:10:23.063228   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.680146   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.179617   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.680324   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:25.180108   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:21.530978   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.032299   11447 pod_ready.go:92] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:22.032329   11447 pod_ready.go:81] duration metric: took 9.521694058s waiting for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:22.032343   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.052078   11447 pod_ready.go:102] pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.548192   11447 pod_ready.go:97] error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548233   11447 pod_ready.go:81] duration metric: took 2.515881289s waiting for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:24.548247   11447 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548257   11447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554129   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.554154   11447 pod_ready.go:81] duration metric: took 5.887843ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554167   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559840   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.559859   11447 pod_ready.go:81] duration metric: took 5.68331ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559871   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565198   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.565217   11447 pod_ready.go:81] duration metric: took 5.336694ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565226   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571811   11447 pod_ready.go:92] pod "kube-proxy-jn56d" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.571827   11447 pod_ready.go:81] duration metric: took 6.594619ms waiting for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571837   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749142   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.749167   11447 pod_ready.go:81] duration metric: took 177.31996ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749179   11447 pod_ready.go:38] duration metric: took 12.249789309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:24.749199   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:24.749257   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:24.784712   11447 api_server.go:70] duration metric: took 12.674498021s to wait for apiserver process to appear ...
	I0813 21:10:24.784740   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:24.784753   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:10:24.793567   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:10:24.794892   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:10:24.794914   11447 api_server.go:129] duration metric: took 10.167822ms to wait for apiserver health ...
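The healthz probe above hits the apiserver's health endpoint over TLS without client credentials. Reproducing it from the host is a one-liner (illustrative; -k skips verification of the cluster's self-signed serving cert, and anonymous auth must be enabled, which is the kubeadm default):

curl -sk https://192.168.50.136:8444/healthz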
	I0813 21:10:24.794925   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:24.951664   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:24.951701   11447 system_pods.go:61] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:24.951709   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:24.951717   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:24.951726   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:24.951731   11447 system_pods.go:61] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:24.951736   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:24.951745   11447 system_pods.go:61] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:24.951753   11447 system_pods.go:61] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:24.951765   11447 system_pods.go:74] duration metric: took 156.833527ms to wait for pod list to return data ...
	I0813 21:10:24.951775   11447 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:25.148940   11447 default_sa.go:45] found service account: "default"
	I0813 21:10:25.148969   11447 default_sa.go:55] duration metric: took 197.176977ms for default service account to be created ...
	I0813 21:10:25.148984   11447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:10:25.352044   11447 system_pods.go:86] 8 kube-system pods found
	I0813 21:10:25.352084   11447 system_pods.go:89] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:25.352096   11447 system_pods.go:89] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:25.352103   11447 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:25.352112   11447 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:25.352119   11447 system_pods.go:89] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:25.352129   11447 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:25.352141   11447 system_pods.go:89] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:25.352150   11447 system_pods.go:89] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:25.352160   11447 system_pods.go:126] duration metric: took 203.170374ms to wait for k8s-apps to be running ...
	I0813 21:10:25.352177   11447 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:10:25.352232   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:25.366009   11447 system_svc.go:56] duration metric: took 13.82353ms WaitForService to wait for kubelet.
	I0813 21:10:25.366041   11447 kubeadm.go:547] duration metric: took 13.255833147s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:10:25.366078   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:25.671992   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:25.672026   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:25.672045   11447 node_conditions.go:105] duration metric: took 305.961488ms to run NodePressure ...
	I0813 21:10:25.672058   11447 start.go:231] waiting for startup goroutines ...
	I0813 21:10:25.741468   11447 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:10:25.743555   11447 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813210102-30853" cluster and "default" namespace by default
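The skew note above compares the host kubectl (1.20.5) against the cluster (1.21.3); one minor version of skew is within kubectl's supported range, so it is logged rather than warned about. The same client/server pair can be read back with:

kubectl version --short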
	I0813 21:10:29.004104   11600 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:29.713525   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:10:29.713570   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:25.680008   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.180477   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.680294   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.180411   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.679956   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.179559   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.679596   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.179509   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.679704   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.180325   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.715719   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:29.715784   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:29.736151   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:10:29.781971   11600 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:29.782030   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.782090   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813205915-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.830681   11600 ops.go:34] apiserver oom_adj: -16
	I0813 21:10:30.150647   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.779463   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.280355   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.779613   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.680059   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.180084   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.679975   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.179732   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.679873   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.179878   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.679567   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.180100   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.679513   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.825619   12791 kubeadm.go:985] duration metric: took 12.084819945s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:34.825653   12791 kubeadm.go:392] StartCluster complete in 48.962278505s
	I0813 21:10:34.825676   12791 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:34.825790   12791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:34.827844   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:35.357758   12791 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813210910-30853" rescaled to 1
	I0813 21:10:35.357830   12791 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:10:35.359667   12791 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:35.357884   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:35.357927   12791 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 21:10:35.358131   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:10:35.359798   12791 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359818   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:35.359820   12791 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.359828   12791 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:35.359855   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.359852   12791 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359908   12791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813210910-30853"
	I0813 21:10:35.360333   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360381   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.360414   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360455   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.374986   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42203
	I0813 21:10:35.375050   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0813 21:10:35.375635   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.375910   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.377813   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377836   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.377912   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377925   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.378238   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.378810   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.378869   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.379811   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.380004   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.391384   12791 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.391410   12791 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:35.391438   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.391832   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.391897   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.391999   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0813 21:10:35.392393   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.392989   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.393014   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.393496   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.393691   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.397628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.400074   12791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:35.400221   12791 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:35.400233   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:35.400253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.406732   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0813 21:10:35.407200   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.407553   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.407703   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.407724   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.408324   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.408333   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.408348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.408363   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.408489   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.408643   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.408815   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.409189   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.409266   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.424756   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0813 21:10:35.425178   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.425688   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.425717   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.426032   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.426208   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.429530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.429754   12791 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:35.429775   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:35.429797   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.436000   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.436664   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.436942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.437117   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.437291   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.594125   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:32.279420   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.780066   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.280227   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.779756   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.280100   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.779428   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.279470   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.779478   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.279401   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.779390   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
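
The interleaved pid-11600 entries are another cluster's bootstrapper re-running `kubectl get sa default` every 500ms until the default service account exists. A hedged sketch of that retry pattern using the wait helpers from k8s.io/apimachinery; the command and interval come from the log, the 2-minute timeout and local (rather than over-SSH) execution are simplifications:

package main

import (
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkDefaultSA returns true once `kubectl get sa default` succeeds.
func checkDefaultSA() (bool, error) {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	if err := cmd.Run(); err != nil {
		return false, nil // not ready yet; keep polling
	}
	return true, nil
}

func main() {
	// Poll every 500ms, matching the timestamps above; give up after 2 minutes.
	_ = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, checkDefaultSA)
}
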
	I0813 21:10:35.796621   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:36.020007   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
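
The sed pipeline above rewrites the coredns ConfigMap so pods in the guest can resolve host.minikube.internal to the host's gateway address. Reconstructed from the command, the injected Corefile fragment is:

hosts {
   192.168.39.1 host.minikube.internal
   fallthrough
}

placed immediately before the existing `forward . /etc/resolv.conf` line, so any name not matched by the hosts block still falls through to the normal upstream resolver.
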
	I0813 21:10:36.022097   12791 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:36.022141   12791 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:37.953285   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.359113303s)
	I0813 21:10:37.953357   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953374   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.953716   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.953737   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:37.953747   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953764   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.954032   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.954047   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018145   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.221484906s)
	I0813 21:10:38.018195   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018210   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018146   12791 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.995992413s)
	I0813 21:10:38.018276   12791 api_server.go:70] duration metric: took 2.660410949s to wait for apiserver process to appear ...
	I0813 21:10:38.018284   12791 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:38.018293   12791 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:10:38.018510   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018529   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018538   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018547   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018806   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018828   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018842   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018866   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.019228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:10:38.019231   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.019253   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.021307   12791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 21:10:38.021330   12791 addons.go:344] enableAddons completed in 2.663409626s
	I0813 21:10:38.037183   12791 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:10:38.040155   12791 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:10:38.040215   12791 api_server.go:129] duration metric: took 21.924445ms to wait for apiserver health ...
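
The healthz wait boils down to GETting https://<node-ip>:8443/healthz until it returns 200 with body "ok", as the api_server.go lines above show. A minimal sketch of that probe; the TLS handling here is a simplification (minikube validates against the cluster CA rather than skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Simplification: skip cert verification for the sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.210:8443/healthz")
	fmt.Println(ok, err)
}
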
	I0813 21:10:38.040228   12791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:38.072532   12791 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:38.072583   12791 system_pods.go:61] "coredns-78fcd69978-42frp" [ffc12ff0-fe4e-422b-ae81-83f17416e379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072594   12791 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072605   12791 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running
	I0813 21:10:38.072623   12791 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:10:38.072630   12791 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running
	I0813 21:10:38.072639   12791 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running
	I0813 21:10:38.072646   12791 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:10:38.072656   12791 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:10:38.072667   12791 system_pods.go:74] duration metric: took 32.432184ms to wait for pod list to return data ...
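
The system_pods wait above corresponds to a plain pod list in the kube-system namespace, with each pod's readiness conditions inspected. A client-go sketch of the equivalent list; the kubeconfig path is taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}
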
	I0813 21:10:38.072681   12791 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:38.079488   12791 default_sa.go:45] found service account: "default"
	I0813 21:10:38.079509   12791 default_sa.go:55] duration metric: took 6.821814ms for default service account to be created ...
	I0813 21:10:38.079522   12791 kubeadm.go:547] duration metric: took 2.721660353s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:10:38.079544   12791 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:38.087838   12791 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.06779332s)
	I0813 21:10:38.087870   12791 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 21:10:38.089094   12791 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:38.089130   12791 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:38.089146   12791 node_conditions.go:105] duration metric: took 9.595836ms to run NodePressure ...
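
The NodePressure check reads each node's capacity (the log reports 17784752Ki of ephemeral storage and 2 CPUs) and its pressure conditions. A hedged client-go sketch showing where those numbers come from; the setup duplicates the previous sketch so it stays runnable on its own:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
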
	I0813 21:10:38.089160   12791 start.go:231] waiting for startup goroutines ...
	I0813 21:10:38.151075   12791 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:10:38.152833   12791 out.go:177] 
	W0813 21:10:38.153012   12791 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:10:38.154648   12791 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:10:38.156287   12791 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813210910-30853" cluster and "default" namespace by default
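
The closing warning compares the host's kubectl (1.20.5) against the cluster version (1.22.0-rc.0) and reports a minor skew of 2. A sketch of that arithmetic; the parsing here is simplified, as minikube uses a semver library:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor number from a version like "1.22.0-rc.0".
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	skew := minorOf("1.22.0-rc.0") - minorOf("1.20.5")
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 2, matching the warning above
}
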
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:03:41 UTC, end at Fri 2021-08-13 21:10:39 UTC. --
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.072772817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=d81f992d-37d7-4556-ad8c-858487444913 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.829132726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b275944e-7d80-481b-bec4-7d34c5293af0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.829292892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b275944e-7d80-481b-bec4-7d34c5293af0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.829491136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash:
de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a3
61-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b
006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a
0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,Cr
eatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69
543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd8
71b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e8915
31b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b275944e-7d80-481b-bec4-7d34c5293af0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.878013312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4da7053-a39d-422a-a7dd-97886bf543ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.878239851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e4da7053-a39d-422a-a7dd-97886bf543ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.878503197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash:
de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a3
61-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b
006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a
0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,Cr
eatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69
543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd8
71b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e8915
31b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e4da7053-a39d-422a-a7dd-97886bf543ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.925788283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=321bde74-ccfb-4f92-b027-c81a2813c2b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.926636163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=321bde74-ccfb-4f92-b027-c81a2813c2b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.927385415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash:
de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a3
61-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b
006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a
0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,Cr
eatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69
543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd8
71b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e8915
31b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=321bde74-ccfb-4f92-b027-c81a2813c2b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.975691221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=30f7ff69-dc31-46a9-923d-38fdbda841e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.975834874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=30f7ff69-dc31-46a9-923d-38fdbda841e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:38 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:38.976142823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash:
de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a3
61-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b
006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a
0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,Cr
eatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69
543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd8
71b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e8915
31b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=30f7ff69-dc31-46a9-923d-38fdbda841e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
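
The CRI-O debug entries above are the server side of ListContainers RPCs on the v1alpha2 RuntimeService; the kubelet (and tools like crictl) issue them over CRI-O's local unix socket. A hedged client sketch against the cri-api of that era; the socket path is CRI-O's default, and an empty filter returns the full container list, exactly as the "No filters were applied" log lines note:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Dial CRI-O's default socket; the "unix://" scheme selects a unix transport.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := pb.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{Filter: &pb.ContainerFilter{}})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
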
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.023838037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=12328b43-2215-4bd7-976d-23aac4b59cf3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.024088010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=12328b43-2215-4bd7-976d-23aac4b59cf3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.024349243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash:
de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a3
61-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b
006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a
0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,Cr
eatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69
543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd8
71b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e8915
31b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=12328b43-2215-4bd7-976d-23aac4b59cf3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.077245149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c136a48c-91df-4d64-bcc8-2aa6bd13cb56 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.077406197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c136a48c-91df-4d64-bcc8-2aa6bd13cb56 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.077655726Z" level=debug msg="Response: &ListContainersResponse{Containers:[...same nine containers as the 21:10:39.024 response above...]}" file="go-grpc-middleware/chain.go:25" id=c136a48c-91df-4d64-bcc8-2aa6bd13cb56 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.128001130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=713ddaef-b344-47f2-91f1-d3413bad6e88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.128148363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=713ddaef-b344-47f2-91f1-d3413bad6e88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.128394409Z" level=debug msg="Response: &ListContainersResponse{Containers:[...same nine containers as the 21:10:39.024 response above...]}" file="go-grpc-middleware/chain.go:25" id=713ddaef-b344-47f2-91f1-d3413bad6e88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.171809047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=21dfdce9-f83b-43ca-a0ce-20a1c9e2c0d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.172187331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=21dfdce9-f83b-43ca-a0ce-20a1c9e2c0d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.172448371Z" level=debug msg="Response: &ListContainersResponse{Containers:[...same nine containers as the 21:10:39.024 response above...]}" file="go-grpc-middleware/chain.go:25" id=21dfdce9-f83b-43ca-a0ce-20a1c9e2c0d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
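	The Response entries above all carry the same nine-container payload: minikube's log collector polls CRI-O's ListContainers RPC several times per second with an empty filter, which is why each response is preceded by a "No filters were applied" line. Below is a minimal Go sketch of that call, assuming the k8s.io/cri-api v1alpha2 client named in the log lines and the crio.sock path from the node annotations further down; it illustrates the RPC and is not part of the test harness:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// Dial CRI-O over its unix socket (path taken from the
		// kubeadm.alpha.kubernetes.io/cri-socket node annotation).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty ListContainersRequest matches the logged requests: no
		// filter is set, so CRI-O returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Short ID, name and state: roughly the columns of the
			// "container status" table below.
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}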
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	78f9412bae3df       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   11 seconds ago      Exited              dashboard-metrics-scraper   1                   749b97f59c0c3
	da5b0f37de36c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 seconds ago      Running             storage-provisioner         0                   5df5c7fd58ec2
	b1f0605333fb5       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   20 seconds ago      Running             kubernetes-dashboard        0                   2c997df02d85c
	99f881d576aca       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   24 seconds ago      Running             coredns                     0                   5e0a9fa5c886d
	3848b04f93b16       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   26 seconds ago      Running             kube-proxy                  0                   3f398ab3b89ee
	cf57c2ca5ce6e       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   49 seconds ago      Running             kube-scheduler              0                   88359d0165f40
	426faaf2ad7c3       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   50 seconds ago      Running             kube-controller-manager     0                   0c2ff6f02f0bf
	9f61c3a7d63f2       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   50 seconds ago      Running             kube-apiserver              0                   0af4bcb76df38
	e990afb78f8b2       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   50 seconds ago      Running             etcd                        0                   d30efefc17fce
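	The table above is the CLI-level view of the same ListContainers data; on the VM it would typically be reproduced with something like "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a" (invocation assumed, not taken from the test output).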
	
	* 
	* ==> coredns [99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210813210102-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210813210102-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=default-k8s-different-port-20210813210102-30853
	                    minikube.k8s.io/updated_at=2021_08_13T21_09_58_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:09:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210813210102-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 21:10:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:09:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:09:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:09:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:10:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.136
	  Hostname:    default-k8s-different-port-20210813210102-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a49921801fb044088819eb98af731e4b
	  System UUID:                a4992180-1fb0-4408-8819-eb98af731e4b
	  Boot ID:                    b0749f82-7a44-496b-9b13-eea1ee12d9e8
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-jphw4                                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     28s
	  kube-system                 etcd-default-k8s-different-port-20210813210102-30853                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         42s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210813210102-30853              250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210813210102-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-jn56d                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210813210102-30853              100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 metrics-server-7c784ccb57-cdhkk                                             100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         24s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-265ml                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-bjd2q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  52s (x6 over 52s)  kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x6 over 52s)  kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x6 over 52s)  kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                29s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeReady
	  Normal  Starting                 26s                kube-proxy  Starting kube-proxy.
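	This "describe nodes" block is the kubectl describe node output for the profile's single node, i.e. roughly "kubectl --context default-k8s-different-port-20210813210102-30853 describe node default-k8s-different-port-20210813210102-30853" (the context name here is inferred from the profile name, not shown in this capture).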
	
	* 
	* ==> dmesg <==
	*               on the kernel command line
	[  +0.000122] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.444808] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.040700] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.903859] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1725 comm=systemd-network
	[  +0.753003] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.310008] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011903] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:04] systemd-fstab-generator[2136]: Ignoring "noauto" for root device
	[  +0.141171] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +0.221343] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +8.225926] systemd-fstab-generator[2364]: Ignoring "noauto" for root device
	[ +18.064269] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.334152] kauditd_printk_skb: 89 callbacks suppressed
	[Aug13 21:05] kauditd_printk_skb: 2 callbacks suppressed
	[ +35.653969] NFSD: Unable to end grace period: -110
	[Aug13 21:09] kauditd_printk_skb: 14 callbacks suppressed
	[ +26.084804] systemd-fstab-generator[5969]: Ignoring "noauto" for root device
	[ +16.997735] systemd-fstab-generator[6368]: Ignoring "noauto" for root device
	[Aug13 21:10] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.062256] kauditd_printk_skb: 107 callbacks suppressed
	[  +9.334751] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.720038] systemd-fstab-generator[7951]: Ignoring "noauto" for root device
	[  +0.895521] systemd-fstab-generator[8005]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da] <==
	* raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 is starting a new election at term 1
	raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 became candidate at term 2
	raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 received MsgVoteResp from 247e73b5d65300e1 at term 2
	raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 became leader at term 2
	raft2021/08/13 21:09:49 INFO: raft.node: 247e73b5d65300e1 elected leader 247e73b5d65300e1 at term 2
	2021-08-13 21:09:49.392234 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 21:09:49.395790 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 21:09:49.397137 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 21:09:49.397279 I | etcdserver: published {Name:default-k8s-different-port-20210813210102-30853 ClientURLs:[https://192.168.50.136:2379]} to cluster 736953c025287a25
	2021-08-13 21:09:49.397288 I | embed: ready to serve client requests
	2021-08-13 21:09:49.399410 I | embed: serving client requests on 192.168.50.136:2379
	2021-08-13 21:09:49.399554 I | embed: ready to serve client requests
	2021-08-13 21:09:49.429449 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:09:57.419650 W | etcdserver: read-only range request "key:\"/registry/events/default/default-k8s-different-port-20210813210102-30853.169af9e755d5fde7\" " with result "range_response_count:0 size:5" took too long (851.99415ms) to execute
	2021-08-13 21:09:57.421983 W | etcdserver: request "header:<ID:63467390364394589 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-ftchd\" mod_revision:0 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-ftchd\" value_size:904 >> failure:<>>" with result "size:16" took too long (784.275831ms) to execute
	2021-08-13 21:09:57.426171 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210813210102-30853\" " with result "range_response_count:1 size:5198" took too long (583.849177ms) to execute
	2021-08-13 21:09:57.427427 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (510.678643ms) to execute
	2021-08-13 21:09:57.432930 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:0 size:5" took too long (677.386887ms) to execute
	2021-08-13 21:10:08.528598 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:10:11.529393 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:260" took too long (331.774961ms) to execute
	2021-08-13 21:10:11.530079 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (187.767839ms) to execute
	2021-08-13 21:10:12.798229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:10:22.800622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:10:25.671627 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6431" took too long (121.296265ms) to execute
	2021-08-13 21:10:32.799654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
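	The "took too long" warnings above are etcd's slow-request notices, emitted when a request exceeds etcd 3.4's expensive-request threshold (about 100ms by default); an 851ms read on this 2-CPU/2GiB VM points to resource pressure during cluster startup rather than an etcd fault. That reading is an interpretation, not part of the log.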
	
	* 
	* ==> kernel <==
	*  21:10:39 up 7 min,  0 users,  load average: 2.90, 1.16, 0.51
	Linux default-k8s-different-port-20210813210102-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810] <==
	* Trace[1492505029]: [514.51431ms] [514.51431ms] END
	I0813 21:09:57.430453       1 trace.go:205] Trace[365018303]: "List" url:/api/v1/namespaces/kube-system/limitranges,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.915) (total time: 514ms):
	Trace[365018303]: ---"Listing from storage done" 514ms (21:09:00.430)
	Trace[365018303]: [514.615722ms] [514.615722ms] END
	I0813 21:09:57.436068       1 trace.go:205] Trace[1750305912]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.50.136,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.754) (total time: 681ms):
	Trace[1750305912]: [681.298646ms] [681.298646ms] END
	I0813 21:09:57.439016       1 trace.go:205] Trace[933891466]: "Get" url:/api/v1/nodes/default-k8s-different-port-20210813210102-30853,user-agent:kubeadm/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.136,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.841) (total time: 597ms):
	Trace[933891466]: ---"About to write a response" 589ms (21:09:00.430)
	Trace[933891466]: [597.313287ms] [597.313287ms] END
	I0813 21:09:57.440255       1 trace.go:205] Trace[2142592379]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.136,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.911) (total time: 528ms):
	Trace[2142592379]: ---"Object stored in database" 527ms (21:09:00.440)
	Trace[2142592379]: [528.920179ms] [528.920179ms] END
	I0813 21:09:57.474192       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:09:58.492767       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:09:58.561940       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:10:04.076723       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:10:11.021273       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 21:10:11.155456       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 21:10:18.191750       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 21:10:18.192089       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:10:18.192169       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:10:25.677823       1 client.go:360] parsed scheme: "passthrough"
	I0813 21:10:25.678194       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 21:10:25.678352       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e] <==
	* I0813 21:10:15.738740       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-cdhkk"
	I0813 21:10:16.359648       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:10:16.423354       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.464393       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.465024       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:10:16.499070       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.501091       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.521054       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.538815       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.539740       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.539765       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.573782       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.574443       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.574480       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.574495       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.617137       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.617478       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.617790       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.619183       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.663622       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.664310       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.664639       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.664771       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.692948       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-bjd2q"
	I0813 21:10:16.751656       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-265ml"
	
	* 
	* ==> kube-proxy [3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b] <==
	* I0813 21:10:13.350525       1 node.go:172] Successfully retrieved node IP: 192.168.50.136
	I0813 21:10:13.350699       1 server_others.go:140] Detected node IP 192.168.50.136
	W0813 21:10:13.350759       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:10:13.513329       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:10:13.513436       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:10:13.513489       1 server_others.go:212] Using iptables Proxier.
	I0813 21:10:13.514298       1 server.go:643] Version: v1.21.3
	I0813 21:10:13.516087       1 config.go:315] Starting service config controller
	I0813 21:10:13.516113       1 config.go:224] Starting endpoint slice config controller
	I0813 21:10:13.516114       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:10:13.516122       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 21:10:13.540156       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 21:10:13.545491       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:10:13.616711       1 shared_informer.go:247] Caches are synced for service config 
	I0813 21:10:13.630270       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db] <==
	* E0813 21:09:54.265641       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:09:54.279826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:09:54.280598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.280806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:09:54.281346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:09:54.281544       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:09:54.281806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.283528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:09:54.284973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:09:54.285135       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:09:54.289619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:09:54.290099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.290291       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.290420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:09:55.217784       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:09:55.230962       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:55.325540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:55.326261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:09:55.361384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:09:55.370306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:09:55.536650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:55.538248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:09:55.596821       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:09:55.652988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 21:09:57.262912       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:03:41 UTC, end at Fri 2021-08-13 21:10:39 UTC. --
	Aug 13 21:10:16 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:16.936598    6377 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a37a9d34-9307-42f7-b165-6aee4b9b2518-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-265ml\" (UID: \"a37a9d34-9307-42f7-b165-6aee4b9b2518\") "
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.872673    6377 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.872715    6377 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.872950    6377 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m4lnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-cdhkk_kube-system(899ed30f-faf1-40e3-9a46-c1ad31aa7f70): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.873001    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-cdhkk" podUID=899ed30f-faf1-40e3-9a46-c1ad31aa7f70
	Aug 13 21:10:18 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:18.574192    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-cdhkk" podUID=899ed30f-faf1-40e3-9a46-c1ad31aa7f70
	Aug 13 21:10:25 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:25.252622    6377 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/899ed30f-faf1-40e3-9a46-c1ad31aa7f70/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-cdhkk"
	Aug 13 21:10:25 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:25.315114    6377 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/a37a9d34-9307-42f7-b165-6aee4b9b2518/etc-hosts with error exit status 1" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml"
	Aug 13 21:10:27 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:27.911933    6377 scope.go:111] "RemoveContainer" containerID="e0b89b63f5fe921ef6c5e04cea4e87cb0117264752ec780c66173b524339e17f"
	Aug 13 21:10:28 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:28.922753    6377 scope.go:111] "RemoveContainer" containerID="e0b89b63f5fe921ef6c5e04cea4e87cb0117264752ec780c66173b524339e17f"
	Aug 13 21:10:28 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:28.923048    6377 scope.go:111] "RemoveContainer" containerID="78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	Aug 13 21:10:28 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:28.923306    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-265ml_kubernetes-dashboard(a37a9d34-9307-42f7-b165-6aee4b9b2518)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml" podUID=a37a9d34-9307-42f7-b165-6aee4b9b2518
	Aug 13 21:10:29 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:29.934482    6377 scope.go:111] "RemoveContainer" containerID="78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	Aug 13 21:10:29 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:29.934814    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-265ml_kubernetes-dashboard(a37a9d34-9307-42f7-b165-6aee4b9b2518)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml" podUID=a37a9d34-9307-42f7-b165-6aee4b9b2518
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.282772    6377 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.282808    6377 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.283028    6377 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m4lnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-cdhkk_kube-system(899ed30f-faf1-40e3-9a46-c1ad31aa7f70): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.283069    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-cdhkk" podUID=899ed30f-faf1-40e3-9a46-c1ad31aa7f70
	Aug 13 21:10:35 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:35.603318    6377 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/899ed30f-faf1-40e3-9a46-c1ad31aa7f70/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-cdhkk"
	Aug 13 21:10:36 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:36.773411    6377 scope.go:111] "RemoveContainer" containerID="78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	Aug 13 21:10:36 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:36.776592    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-265ml_kubernetes-dashboard(a37a9d34-9307-42f7-b165-6aee4b9b2518)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml" podUID=a37a9d34-9307-42f7-b165-6aee4b9b2518
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:37.004775    6377 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b] <==
	* 2021/08/13 21:10:19 Using namespace: kubernetes-dashboard
	2021/08/13 21:10:19 Using in-cluster config to connect to apiserver
	2021/08/13 21:10:19 Using secret token for csrf signing
	2021/08/13 21:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:10:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:10:19 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:10:19 Generating JWE encryption key
	2021/08/13 21:10:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:10:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:10:19 Initializing JWE encryption key from synchronized object
	2021/08/13 21:10:19 Creating in-cluster Sidecar client
	2021/08/13 21:10:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:10:19 Serving insecurely on HTTP port: 9090
	2021/08/13 21:10:19 Starting overwatch
	
	* 
	* ==> storage-provisioner [da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094] <==
	* I0813 21:10:19.288764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:10:19.360044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:10:19.364685       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:10:19.392709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:10:19.393488       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210102-30853_5ce06dd5-f19b-4b23-af54-735315d3c3bf!
	I0813 21:10:19.405519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2da62b6a-0ff9-4a92-b2ac-90266f4c9f83", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813210102-30853_5ce06dd5-f19b-4b23-af54-735315d3c3bf became leader
	I0813 21:10:19.496537       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210102-30853_5ce06dd5-f19b-4b23-af54-735315d3c3bf!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853: exit status 2 (282.572589ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-cdhkk
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 describe pod metrics-server-7c784ccb57-cdhkk
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210813210102-30853 describe pod metrics-server-7c784ccb57-cdhkk: exit status 1 (82.766481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-cdhkk" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210813210102-30853 describe pod metrics-server-7c784ccb57-cdhkk: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853: exit status 2 (271.588329ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813210102-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20210813210102-30853 logs -n 25: (1.277294415s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:19 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:01:23 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:15 UTC | Fri, 13 Aug 2021 21:02:15 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:27 UTC | Fri, 13 Aug 2021 21:02:28 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:02 UTC | Fri, 13 Aug 2021 21:03:15 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:07 UTC | Fri, 13 Aug 2021 21:09:09 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:09 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:11 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:10:25 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:36 UTC | Fri, 13 Aug 2021 21:10:36 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:10:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:09:10
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
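
	(Editor's aside: every entry below follows the glog/klog prefix documented in the header line above. A minimal, hypothetical Go sketch of parsing that prefix — the regexp and the names klogLine/main are illustrative only, not code from minikube or klog:

	// Sketch: split a klog-style line "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// into its documented fields. Assumes the format string quoted in the header above.
	package main

	import (
		"fmt"
		"regexp"
	)

	// One capture group per field of the documented prefix.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I0813 21:09:10.673379   12791 out.go:298] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog-formatted line")
			return
		}
		fmt.Printf("severity=%s month=%s day=%s time=%s tid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}

	Run against the first entry below, it would report severity=I, tid=12791, file=out.go, line=298, and the trailing message.)
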
	I0813 21:09:10.673379   12791 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:09:10.673452   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673457   12791 out.go:311] Setting ErrFile to fd 2...
	I0813 21:09:10.673460   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673589   12791 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:09:10.673842   12791 out.go:305] Setting JSON to false
	I0813 21:09:10.710967   12791 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":10313,"bootTime":1628878638,"procs":196,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:09:10.711108   12791 start.go:121] virtualization: kvm guest
	I0813 21:09:10.714392   12791 out.go:177] * [newest-cni-20210813210910-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:09:10.716013   12791 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:09:10.714549   12791 notify.go:169] Checking for updates...
	I0813 21:09:10.717634   12791 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:09:10.719077   12791 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:10.720797   12791 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:09:10.721401   12791 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:09:10.721555   12791 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:10.721780   12791 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 21:09:10.721849   12791 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:09:10.756752   12791 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:09:10.756780   12791 start.go:278] selected driver: kvm2
	I0813 21:09:10.756787   12791 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:09:10.756803   12791 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:09:10.758053   12791 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.758234   12791 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:09:10.769742   12791 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:09:10.769793   12791 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 21:09:10.769818   12791 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 21:09:10.769965   12791 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:09:10.769992   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:10.769999   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:10.770006   12791 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:09:10.770016   12791 start_flags.go:277] config:
	{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:10.770113   12791 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.772194   12791 out.go:177] * Starting control plane node newest-cni-20210813210910-30853 in cluster newest-cni-20210813210910-30853
	I0813 21:09:10.772225   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:10.772278   12791 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 21:09:10.772313   12791 cache.go:56] Caching tarball of preloaded images
	I0813 21:09:10.772443   12791 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 21:09:10.772466   12791 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 21:09:10.772616   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:10.772647   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json: {Name:mka76415e48e0242b5a1559d0d7199fac2bfb5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:10.772840   12791 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:09:10.772878   12791 start.go:313] acquiring machines lock for newest-cni-20210813210910-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:09:10.772950   12791 start.go:317] acquired machines lock for "newest-cni-20210813210910-30853" in 46.661µs
	I0813 21:09:10.772977   12791 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:09:10.773061   12791 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:09:07.914518   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.406958   11447 pod_ready.go:81] duration metric: took 4m0.40016385s waiting for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:08.406984   11447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:08.407011   11447 pod_ready.go:38] duration metric: took 4m38.843620331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:08.407047   11447 kubeadm.go:604] restartCluster took 5m2.813329014s
	W0813 21:09:08.407209   11447 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:08.407246   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:07.902231   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.401905   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.775162   12791 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 21:09:10.775296   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:10.775358   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:10.786479   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0813 21:09:10.786930   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:10.787562   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:09:10.787587   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:10.788015   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:10.788228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:10.788398   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:10.788591   12791 start.go:160] libmachine.API.Create for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:10.788640   12791 client.go:168] LocalClient.Create starting
	I0813 21:09:10.788684   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:09:10.788746   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788770   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.788912   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:09:10.788937   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788956   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.789012   12791 main.go:130] libmachine: Running pre-create checks...
	I0813 21:09:10.789029   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .PreCreateCheck
	I0813 21:09:10.789351   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:10.789790   12791 main.go:130] libmachine: Creating machine...
	I0813 21:09:10.789804   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Create
	I0813 21:09:10.789932   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating KVM machine...
	I0813 21:09:10.792752   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found existing default KVM network
	I0813 21:09:10.794412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794251   12815 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc000010800] misses:0}
	I0813 21:09:10.794453   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794342   12815 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 21:09:10.817502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | trying to create private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24...
	I0813 21:09:11.103452   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24 created
	I0813 21:09:11.103485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.103368   12815 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.103509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.103562   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:09:11.103608   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:09:11.320966   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.320858   12815 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa...
	I0813 21:09:11.459093   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.458976   12815 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/newest-cni-20210813210910-30853.rawdisk...
	I0813 21:09:11.459148   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing magic tar header
	I0813 21:09:11.459177   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing SSH key tar header
	I0813 21:09:11.459194   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.459075   12815 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.459223   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 (perms=drwx------)
	I0813 21:09:11.459288   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853
	I0813 21:09:11.459321   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:09:11.459350   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:09:11.459373   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:09:11.459391   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:09:11.459409   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.459426   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:09:11.459444   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:09:11.459464   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:09:11.459485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:09:11.459500   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home
	I0813 21:09:11.459515   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:09:11.459528   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Skipping /home - not owner
	I0813 21:09:11.459546   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.488427   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:ee:fb:7e in network default
	I0813 21:09:11.489099   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring networks are active...
	I0813 21:09:11.489140   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.491476   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network default is active
	I0813 21:09:11.491829   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network mk-newest-cni-20210813210910-30853 is active
	I0813 21:09:11.492457   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Getting domain xml...
	I0813 21:09:11.494775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.955786   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting to get IP...
	I0813 21:09:11.956670   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957315   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.957262   12815 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:09:12.221730   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222307   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222349   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.222212   12815 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:09:12.604662   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605164   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605191   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.605108   12815 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:09:13.029701   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030156   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030218   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.030122   12815 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:09:13.504659   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505143   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.505105   12815 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:09:14.093824   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094446   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.094345   12815 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:09:14.929917   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.930469   12815 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:09:12.902877   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.903637   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.678952   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679492   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679571   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:15.679462   12815 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:09:16.668007   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668572   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668609   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:16.668495   12815 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:09:17.859819   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860363   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860390   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:17.860285   12815 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:09:19.539855   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:19.540442   12815 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:09:17.403580   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.901370   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.902145   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.887601   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:21.888074   12815 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:09:25.255905   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256490   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Found IP for machine: 192.168.39.210
	I0813 21:09:25.256524   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has current primary IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserving static IP address...
	I0813 21:09:25.256915   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"} in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.303282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserved static IP address: 192.168.39.210
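
	(Editor's aside: the repeated "will retry after …" entries above — 263ms, 381ms, 422ms, 473ms, 587ms, up to 3.36s — show minikube polling for the VM's DHCP lease with growing, jittered delays. A minimal Go sketch of that pattern, assuming a hypothetical waitFor helper; the growth factor and jitter here are approximations read off the log, not minikube's actual retry.go implementation:

	// Sketch: retry fn with jittered, geometrically growing delays until it
	// succeeds or attempts run out, printing lines like the ones logged above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitFor(fn func() error, maxAttempts int) error {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow ~1.5x per attempt, as the log suggests
		}
		return errors.New("machine did not come up")
	}

	func main() {
		attempts := 0
		_ = waitFor(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("no IP yet") // simulate the lease not existing yet
			}
			return nil
		}, 12)
	}

	Here the poll succeeds on the fifth attempt, much as the log above finds the IP after roughly a dozen checks.)
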
	I0813 21:09:25.303341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Getting to WaitForSSH function...
	I0813 21:09:25.303352   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting for SSH to be available...
	I0813 21:09:25.309055   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309442   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.309474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309627   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH client type: external
	I0813 21:09:25.309651   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa (-rw-------)
	I0813 21:09:25.309698   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:09:25.309731   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | About to run SSH command:
	I0813 21:09:25.309744   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | exit 0
	I0813 21:09:25.467104   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:09:25.467603   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) KVM machine creation complete!
	I0813 21:09:25.467679   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:25.468310   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468513   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468691   12791 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:09:25.468710   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:09:25.471536   12791 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:09:25.471555   12791 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:09:25.471565   12791 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:09:25.471575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.476123   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476450   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.476479   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476604   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.476755   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.476933   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.477105   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.477284   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.477466   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.477480   12791 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:09:25.594161   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.594190   12791 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:09:25.594203   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.600130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600531   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.600564   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600765   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.600974   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601303   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.601456   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.601620   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.601635   12791 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:09:22.392237   11600 pod_ready.go:81] duration metric: took 4m0.007094721s waiting for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:22.392261   11600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:22.392283   11600 pod_ready.go:38] duration metric: took 4m14.135839126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:22.392312   11600 kubeadm.go:604] restartCluster took 4m52.280117973s
	W0813 21:09:22.392448   11600 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:22.392485   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:25.715874   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:09:25.715991   12791 main.go:130] libmachine: found compatible host: buildroot
	I0813 21:09:25.716007   12791 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:09:25.716023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716285   12791 buildroot.go:166] provisioning hostname "newest-cni-20210813210910-30853"
	I0813 21:09:25.716311   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716475   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.722141   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.722575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722814   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.723002   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723169   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723323   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.723458   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.723611   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.723626   12791 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname
	I0813 21:09:25.855120   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813210910-30853
	
	I0813 21:09:25.855151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.861182   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861544   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.861567   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861715   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.861922   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862214   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.862344   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.862548   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.862577   12791 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813210910-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813210910-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813210910-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:09:25.982023   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.982082   12791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:09:25.982118   12791 buildroot.go:174] setting up certificates
	I0813 21:09:25.982134   12791 provision.go:83] configureAuth start
	I0813 21:09:25.982150   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.982399   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:25.988009   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.988380   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.993579   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.993994   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.994024   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.994249   12791 provision.go:138] copyHostCerts
	I0813 21:09:25.994336   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:09:25.994347   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:09:25.994396   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:09:25.994483   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:09:25.994497   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:09:25.994532   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:09:25.994643   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:09:25.994656   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:09:25.994688   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:09:25.994760   12791 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813210910-30853 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube newest-cni-20210813210910-30853]
	I0813 21:09:26.305745   12791 provision.go:172] copyRemoteCerts
	I0813 21:09:26.305810   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:09:26.305840   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.311502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.311880   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.311916   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.312018   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.312266   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.312474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.312635   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:26.397917   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:09:26.415261   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:09:26.432018   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:09:26.448392   12791 provision.go:86] duration metric: configureAuth took 466.244488ms
	I0813 21:09:26.448413   12791 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:09:26.448550   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:26.448647   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.453886   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.454267   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454404   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.454578   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454719   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454882   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.455020   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:26.455171   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:26.455193   12791 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:09:27.218253   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:09:27.218291   12791 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:09:27.218304   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetURL
	I0813 21:09:27.220942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using libvirt version 3000000
	I0813 21:09:27.225565   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.225908   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.225955   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.226230   12791 main.go:130] libmachine: Docker is up and running!
	I0813 21:09:27.226255   12791 main.go:130] libmachine: Reticulating splines...
	I0813 21:09:27.226262   12791 client.go:171] LocalClient.Create took 16.437611332s
	I0813 21:09:27.226308   12791 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813210910-30853" took 16.437720973s
	I0813 21:09:27.226319   12791 start.go:267] post-start starting for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:27.226323   12791 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:09:27.226339   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.226579   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:09:27.226605   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.231167   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231514   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.231541   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231723   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.231888   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.232115   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.232258   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.318810   12791 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:09:27.324679   12791 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:09:27.324708   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:09:27.324766   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:09:27.324867   12791 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:09:27.324993   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:09:27.332665   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:27.349495   12791 start.go:270] post-start completed in 123.164223ms
	I0813 21:09:27.349583   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:27.350235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.356173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.356569   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356804   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:27.357034   12791 start.go:129] duration metric: createHost completed in 16.583958717s
	I0813 21:09:27.357054   12791 start.go:80] releasing machines lock for "newest-cni-20210813210910-30853", held for 16.584089955s
	I0813 21:09:27.357097   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.357282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.361779   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.362122   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362275   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362445   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362924   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.363133   12791 ssh_runner.go:149] Run: systemctl --version
	I0813 21:09:27.363160   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.363219   12791 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:09:27.363264   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.368253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368519   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.368556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.368784   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.368919   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.369055   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.369149   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369521   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.369556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369717   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.369863   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.369979   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.370099   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.452425   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:27.452543   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:31.448706   12791 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.996135455s)
	I0813 21:09:31.448838   12791 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:09:31.448901   12791 ssh_runner.go:149] Run: which lz4
	I0813 21:09:31.453326   12791 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:09:31.458022   12791 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:09:31.458058   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 21:09:34.040840   12791 crio.go:362] Took 2.587545 seconds to copy over tarball
	I0813 21:09:34.040960   12791 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:09:39.662568   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.255292287s)
	I0813 21:09:39.662654   11447 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:09:39.679831   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:09:39.679928   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:39.725756   11447 cri.go:76] found id: ""
	I0813 21:09:39.725838   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:39.734367   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:39.743419   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:39.743465   11447 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:39.046178   12791 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.005181631s)
	I0813 21:09:39.046212   12791 crio.go:369] Took 5.005343 seconds to extract the tarball
	I0813 21:09:39.046225   12791 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:09:39.096327   12791 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:09:39.108664   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:09:39.120896   12791 docker.go:153] disabling docker service ...
	I0813 21:09:39.120956   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:09:39.132781   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:09:39.144772   12791 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:09:39.291366   12791 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:09:39.473805   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:09:39.488990   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:09:39.508851   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:09:39.519787   12791 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:09:39.527766   12791 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:09:39.527827   12791 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:09:39.549292   12791 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
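
The sysctl probe above fails with status 255 because br_netfilter is not yet loaded, so minikube loads the module and then enables IPv4 forwarding. A sketch of that check-then-load sequence in Go (requires root; error handling kept deliberately minimal):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Probe the same path that "sudo sysctl net.bridge.bridge-nf-call-iptables" reads.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Mirrors "sudo modprobe br_netfilter" after the probe fails.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
	// Mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		panic(err)
	}
}
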
	I0813 21:09:39.557653   12791 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:09:39.695889   12791 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:09:39.852538   12791 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:09:39.852673   12791 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:09:39.865143   12791 start.go:413] Will wait 60s for crictl version
	I0813 21:09:39.865219   12791 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:09:39.902891   12791 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:09:39.902976   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:40.146285   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:44.881949   11447 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:44.881970   12791 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:09:44.882025   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:44.888023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888330   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:44.888361   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888544   12791 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:09:44.893252   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:09:44.903812   12791 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.crt
	I0813 21:09:44.903997   12791 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:44.922443   12791 out.go:177]   - kubelet.network-plugin=cni
	I0813 21:09:44.923908   12791 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:09:44.923979   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:44.924054   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.004762   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.004791   12791 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:09:45.004856   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.042121   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.042150   12791 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:09:45.042226   12791 ssh_runner.go:149] Run: crio config
	I0813 21:09:45.253009   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:45.253045   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:45.253059   12791 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:09:45.253078   12791 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813210910-30853 NodeName:newest-cni-20210813210910-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.210 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:09:45.253242   12791 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813210910-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
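
The kubeadm config dumped above is rendered from the kubeadm options struct logged at kubeadm.go:153. A simplified sketch of how such a fragment could be rendered with text/template; the struct and template here are illustrative stand-ins, not minikube's actual template, and the values are taken from the log:

package main

import (
	"os"
	"text/template"
)

// initCfg holds just the fields needed for the InitConfiguration fragment.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.39.210",
		BindPort:         8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "newest-cni-20210813210910-30853",
		NodeIP:           "192.168.39.210",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
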
	
	I0813 21:09:45.253382   12791 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813210910-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.210 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:09:45.253451   12791 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:09:45.260928   12791 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:09:45.260983   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:09:45.268144   12791 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (554 bytes)
	I0813 21:09:45.280833   12791 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:09:45.293352   12791 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0813 21:09:45.306281   12791 ssh_runner.go:149] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0813 21:09:45.310235   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
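
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: strip any stale line for the hostname, then append the current address. The same logic as a Go sketch (hostname and address from the log; run as root):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.210\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for the same hostname, like the grep -v above.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
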
	I0813 21:09:45.322126   12791 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853 for IP: 192.168.39.210
	I0813 21:09:45.322191   12791 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:09:45.322212   12791 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:09:45.322281   12791 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:45.322307   12791 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a
	I0813 21:09:45.322319   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a with IP's: [192.168.39.210 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:09:45.521630   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a ...
	I0813 21:09:45.521662   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a: {Name:mk4aa4db18dba264c364eea6455fafca6541c687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521857   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a ...
	I0813 21:09:45.521869   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a: {Name:mk4bafabda5b550064b81d0be7e6d613e7cbe853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521953   12791 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt
	I0813 21:09:45.522012   12791 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key
	I0813 21:09:45.522063   12791 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key
	I0813 21:09:45.522071   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt with IP's: []
	I0813 21:09:45.572044   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt ...
	I0813 21:09:45.572072   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt: {Name:mk46480092ca0ddfdbb22ced231c8543e6fadff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.572258   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key ...
	I0813 21:09:45.572270   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key: {Name:mk2ff838c1ce904cf05995003085f2c953d17b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
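
The crypto.go lines above generate fresh key pairs and sign them with the shared minikubeCA, embedding the node's IP SANs into the apiserver certificate. A condensed Go sketch of that signing step; the CA is generated inline here (in the log it already exists under .minikube/ca.{crt,key}), and the key sizes, serials, and validity window are illustrative assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1), // illustrative serial
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	// Leaf certificate with the IP SANs listed in the crypto.go:69 line above.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.39.210"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}); err != nil {
		panic(err)
	}
}
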
	I0813 21:09:45.572443   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:09:45.572486   12791 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:09:45.572497   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:09:45.572520   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:09:45.572550   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:09:45.572575   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:09:45.572620   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:45.573530   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:09:45.591406   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:09:45.607675   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:09:45.623382   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:09:44.885025   11447 out.go:204]   - Booting up control plane ...
	I0813 21:09:45.638600   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:09:45.655496   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:09:45.672748   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:09:45.690934   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:09:45.709394   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:09:45.727886   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:09:45.747118   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:09:45.764623   12791 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:09:45.776487   12791 ssh_runner.go:149] Run: openssl version
	I0813 21:09:45.782506   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:09:45.790602   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795798   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795845   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.801633   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:09:45.809459   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:09:45.817086   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821525   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821581   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.827427   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:09:45.835137   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:09:45.843222   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848030   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848070   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.854871   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:09:45.863382   12791 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:45.863483   12791 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:09:45.863550   12791 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:45.897179   12791 cri.go:76] found id: ""
	I0813 21:09:45.897265   12791 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:09:45.904791   12791 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:45.911599   12791 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:45.918334   12791 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:45.918383   12791 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:57.982116   11447 out.go:204]   - Configuring RBAC rules ...
	I0813 21:09:58.584325   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:09:58.584349   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:00.460094   12791 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:58.586084   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:09:58.586145   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:09:58.603522   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:09:58.627002   11447 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:09:58.627101   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:58.627103   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813210102-30853 minikube.k8s.io/updated_at=2021_08_13T21_09_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.050930   11447 ops.go:34] apiserver oom_adj: -16
	I0813 21:09:59.051059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.695711   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.195937   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.695450   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.003248   12791 out.go:204]   - Booting up control plane ...
	I0813 21:10:01.195565   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:01.695971   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.195512   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.696069   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.195960   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.696007   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.195636   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.695628   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.195701   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.695999   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.044352   11600 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (46.651842681s)
	I0813 21:10:09.044429   11600 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:09.059478   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:09.059553   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:09.093284   11600 cri.go:76] found id: ""
	I0813 21:10:09.093381   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:09.100568   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:09.107226   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:09.107269   11600 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:10:06.195800   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:06.695240   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.195746   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.695213   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.195912   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.695965   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.195595   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.696049   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.195131   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.695293   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.730908   11600 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:11.196059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:11.534135   11447 kubeadm.go:985] duration metric: took 12.907094032s to wait for elevateKubeSystemPrivileges.
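
The burst of identical "kubectl get sa default" runs above is a poll loop: elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists (12.9s here). The retry pattern as a Go sketch; the helper name and the timeout value are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor reruns cmd at the given interval until it exits zero or the
// timeout elapses, mirroring the repeated ssh_runner invocations above.
func waitFor(cmd []string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command(cmd[0], cmd[1:]...).Run(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %v: %w", timeout, cmd, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor([]string{"kubectl", "get", "sa", "default"}, 500*time.Millisecond, 2*time.Minute)
	fmt.Println(err)
}
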
	I0813 21:10:11.534170   11447 kubeadm.go:392] StartCluster complete in 6m5.98958255s
	I0813 21:10:11.534191   11447 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:11.534316   11447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:11.535601   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:12.110091   11447 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813210102-30853" rescaled to 1
	I0813 21:10:12.110179   11447 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:10:12.112084   11447 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:12.110253   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:12.112158   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:12.110569   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:10:12.110623   11447 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:10:12.112334   11447 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112337   11447 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112351   11447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112358   11447 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112366   11447 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:12.112400   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112736   11447 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112752   11447 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112760   11447 addons.go:147] addon metrics-server should already be in state true
	I0813 21:10:12.112763   11447 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112774   11447 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112783   11447 addons.go:147] addon dashboard should already be in state true
	I0813 21:10:12.112784   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112802   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112857   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.112894   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.112750   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113192   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113201   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113224   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113233   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113340   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.140644   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0813 21:10:12.140642   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0813 21:10:12.140661   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41067
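
Each "Plugin server listening" line is a separate docker-machine-driver-kvm2 child process that libmachine launches per concurrent operation and talks to over RPC on an ephemeral localhost port, which is why several ports show up at once. While the run is live they could be listed from the Jenkins host with standard tools (a host-side sketch, not part of the test):

	pgrep -af docker-machine-driver-kvm2
	ss -ltnp 2>/dev/null | grep docker-machine
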
	I0813 21:10:12.141348   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141465   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141541   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141935   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.141953   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142074   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142081   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142089   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142093   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142438   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.142486   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143136   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143176   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.143388   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143929   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143972   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.144251   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.144301   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0813 21:10:12.144729   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.145337   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.145357   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.145698   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.146348   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.146380   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161135   11447 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.161159   11447 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:12.161188   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.161594   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.161636   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161853   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0813 21:10:12.161878   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0813 21:10:12.162218   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162412   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162720   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162740   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.162900   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162921   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.163146   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.163294   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.166669   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.169181   11447 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.169252   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:10:12.169267   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
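
The "scp memory --> <path>" wording means the manifest is streamed from an asset embedded in the minikube binary over the existing SSH session, rather than copied from a file on disk. A rough shell equivalent, assuming the YAML were available locally (key path, user, and IP are the ones logged for this machine below):

	cat metrics-apiservice.yaml | \
	  ssh -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa \
	      docker@192.168.50.136 \
	      "sudo tee /etc/kubernetes/addons/metrics-apiservice.yaml >/dev/null"
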
	I0813 21:10:12.167214   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.169288   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.169571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.173910   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.175978   11447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:12.176070   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0813 21:10:12.176093   11447 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.176103   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:12.176120   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.175639   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.176186   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.176216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.175916   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0813 21:10:12.176232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.176420   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.176469   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.176549   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.176672   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
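
This sshutil line spells out everything needed to open an interactive session to the node for debugging; minikube wraps the same credentials in its ssh subcommand:

	minikube -p default-k8s-different-port-20210813210102-30853 ssh
	# or, using the raw credentials from the line above:
	ssh -p 22 -i <SSHKeyPath as logged> docker@192.168.50.136
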
	I0813 21:10:12.176869   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.177027   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177041   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177293   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177308   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177366   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.177663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.177782   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.178349   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.178391   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.181885   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.183919   11447 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:10:12.182804   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183976   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.184012   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183416   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:10.812498   11600 out.go:204]   - Booting up control plane ...
	I0813 21:10:12.186349   11447 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.186413   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:10:12.184193   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.186427   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:10:12.186446   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.186621   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.186808   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.190702   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0813 21:10:12.191063   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.191556   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.191584   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.191977   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.192165   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.192357   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192757   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.192786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192929   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.193084   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.193242   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.193363   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.195129   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.195341   11447 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.195358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:12.195378   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.200908   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201282   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.201309   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201443   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.201571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.201711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.201825   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.425248   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.468978   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:10:12.469021   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:10:12.494701   11447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.495206   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
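
This pipeline is how the host record lands in CoreDNS: dump the coredns ConfigMap, use sed to insert a hosts block immediately before the "forward . /etc/resolv.conf" directive, and push the result back with kubectl replace. Reconstructed from the sed expression, the Corefile gains:

	hosts {
	   192.168.50.1 host.minikube.internal
	   fallthrough
	}

and the edit can be inspected afterwards with:

	kubectl -n kube-system get configmap coredns -o yaml
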
	I0813 21:10:12.499329   11447 node_ready.go:49] node "default-k8s-different-port-20210813210102-30853" has status "Ready":"True"
	I0813 21:10:12.499359   11447 node_ready.go:38] duration metric: took 4.621451ms waiting for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.499373   11447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:12.499757   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.510602   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
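
The "extra waiting" here is a readiness gate over the control-plane labels listed above. A hedged kubectl equivalent of the same checks (same labels, same 6m budget):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
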
	I0813 21:10:12.610525   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:10:12.610562   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:10:12.656245   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:10:12.656276   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:10:12.772157   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:10:12.772191   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:10:12.815178   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:12.815208   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:10:12.932243   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:10:12.932272   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:10:12.992201   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
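
Note that a single kubectl invocation applies all four metrics-server manifests via repeated -f flags. Once the apply completes, the registration is conventionally checked through the APIService object and the deployment (resource names assumed to be the usual ones):

	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl -n kube-system get deployment metrics-server
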
	I0813 21:10:13.151328   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:10:13.151358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:10:13.272742   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:10:13.272771   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:10:13.504799   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:10:13.504829   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:10:13.711447   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:10:13.711476   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:10:13.833690   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:10:13.833722   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:10:13.907807   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:13.907839   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:10:14.189833   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:14.535190   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.411080   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.985781369s)
	I0813 21:10:15.411145   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411139   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.91134851s)
	I0813 21:10:15.411163   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411180   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411211   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411243   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.916004514s)
	I0813 21:10:15.411301   11447 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0813 21:10:15.412648   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412658   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412711   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412721   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412731   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412738   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412765   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412779   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412797   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.412740   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413131   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413156   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413170   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413203   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413207   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413222   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413245   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.413261   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413535   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413550   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138255   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.145991542s)
	I0813 21:10:16.138325   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138339   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138639   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.138660   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.138663   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138692   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138702   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138996   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.139040   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.139056   11447 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:16.138998   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.609336   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:17.060932   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.871038717s)
	I0813 21:10:17.061005   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061023   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061327   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061348   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.061358   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061349   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061370   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061708   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061715   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061777   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.064437   11447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:10:17.064471   11447 addons.go:344] enableAddons completed in 4.953854482s
	I0813 21:10:19.033855   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:21.685414   12791 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:22.697730   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:10:22.697758   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:22.699669   12791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:22.699748   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:22.711081   12791 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
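
The 457-byte 1-k8s.conflist carries the bridge CNI configuration announced two lines up. The log does not show the file's contents; the sketch below is a generic bridge + host-local config of the same shape, with illustrative values only (bridge name and subnet are assumptions, not minikube's actual file):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    }
	  ]
	}
	EOF
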
	I0813 21:10:22.740715   12791 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:22.740845   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:22.740928   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813210910-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_22_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.063141   12791 ops.go:34] apiserver oom_adj: -16
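
An oom_adj of -16 (the scale runs from -17, never kill, to +15) makes the apiserver one of the last processes the kernel OOM killer will pick, confirming kubeadm protected it. The checks above can be re-run by hand, and the clusterrolebinding and node labels they created verified with plain kubectl:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # expect -16
	kubectl get clusterrolebinding minikube-rbac
	kubectl get nodes --show-labels | grep minikube.k8s.io/version
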
	I0813 21:10:23.063228   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.680146   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.179617   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.680324   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:25.180108   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:21.530978   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.032299   11447 pod_ready.go:92] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:22.032329   11447 pod_ready.go:81] duration metric: took 9.521694058s waiting for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:22.032343   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.052078   11447 pod_ready.go:102] pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.548192   11447 pod_ready.go:97] error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548233   11447 pod_ready.go:81] duration metric: took 2.515881289s waiting for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:24.548247   11447 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548257   11447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554129   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.554154   11447 pod_ready.go:81] duration metric: took 5.887843ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554167   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559840   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.559859   11447 pod_ready.go:81] duration metric: took 5.68331ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559871   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565198   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.565217   11447 pod_ready.go:81] duration metric: took 5.336694ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565226   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571811   11447 pod_ready.go:92] pod "kube-proxy-jn56d" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.571827   11447 pod_ready.go:81] duration metric: took 6.594619ms waiting for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571837   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749142   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.749167   11447 pod_ready.go:81] duration metric: took 177.31996ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749179   11447 pod_ready.go:38] duration metric: took 12.249789309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:24.749199   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:24.749257   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
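
The pgrep flags decode as: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID, so this finds the most recently started apiserver whose argv mentions minikube:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
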
	I0813 21:10:24.784712   11447 api_server.go:70] duration metric: took 12.674498021s to wait for apiserver process to appear ...
	I0813 21:10:24.784740   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:24.784753   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:10:24.793567   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:10:24.794892   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:10:24.794914   11447 api_server.go:129] duration metric: took 10.167822ms to wait for apiserver health ...
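
The healthz probe can be reproduced from anywhere the node is reachable; /healthz is normally readable by unauthenticated clients via the default system:public-info-viewer binding, and -k skips verification of the cluster's self-signed CA (both assumptions about this cluster keeping Kubernetes defaults):

	curl -k https://192.168.50.136:8444/healthz
	# ok
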
	I0813 21:10:24.794925   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:24.951664   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:24.951701   11447 system_pods.go:61] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:24.951709   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:24.951717   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:24.951726   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:24.951731   11447 system_pods.go:61] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:24.951736   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:24.951745   11447 system_pods.go:61] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:24.951753   11447 system_pods.go:61] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:24.951765   11447 system_pods.go:74] duration metric: took 156.833527ms to wait for pod list to return data ...
	I0813 21:10:24.951775   11447 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:25.148940   11447 default_sa.go:45] found service account: "default"
	I0813 21:10:25.148969   11447 default_sa.go:55] duration metric: took 197.176977ms for default service account to be created ...
	I0813 21:10:25.148984   11447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:10:25.352044   11447 system_pods.go:86] 8 kube-system pods found
	I0813 21:10:25.352084   11447 system_pods.go:89] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:25.352096   11447 system_pods.go:89] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:25.352103   11447 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:25.352112   11447 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:25.352119   11447 system_pods.go:89] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:25.352129   11447 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:25.352141   11447 system_pods.go:89] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:25.352150   11447 system_pods.go:89] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:25.352160   11447 system_pods.go:126] duration metric: took 203.170374ms to wait for k8s-apps to be running ...
	I0813 21:10:25.352177   11447 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:10:25.352232   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:25.366009   11447 system_svc.go:56] duration metric: took 13.82353ms WaitForService to wait for kubelet.
	I0813 21:10:25.366041   11447 kubeadm.go:547] duration metric: took 13.255833147s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:10:25.366078   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:25.671992   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:25.672026   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:25.672045   11447 node_conditions.go:105] duration metric: took 305.961488ms to run NodePressure ...
	I0813 21:10:25.672058   11447 start.go:231] waiting for startup goroutines ...
	I0813 21:10:25.741468   11447 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:10:25.743555   11447 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813210102-30853" cluster and "default" namespace by default
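
The "(minor skew: 1)" note is informational: kubectl supports one minor version of client/server skew, so kubectl 1.20.5 against a 1.21.3 apiserver is within policy. Typical sanity checks once the profile reports Done:

	kubectl config current-context    # default-k8s-different-port-20210813210102-30853
	kubectl get pods -A
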
	I0813 21:10:29.004104   11600 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:29.713525   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:10:29.713570   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:25.680008   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.180477   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.680294   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.180411   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.679956   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.179559   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.679596   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.179509   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.679704   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.180325   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.715719   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:29.715784   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:29.736151   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:10:29.781971   11600 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:29.782030   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.782090   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813205915-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.830681   11600 ops.go:34] apiserver oom_adj: -16
	I0813 21:10:30.150647   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.779463   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.280355   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.779613   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.680059   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.180084   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.679975   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.179732   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.679873   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.179878   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.679567   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.180100   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.679513   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.825619   12791 kubeadm.go:985] duration metric: took 12.084819945s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:34.825653   12791 kubeadm.go:392] StartCluster complete in 48.962278505s
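
The twelve-second wall of "kubectl get sa default" calls above is elevateKubeSystemPrivileges polling, at roughly 500 ms intervals, until the ServiceAccount controller has created the default service account that the minikube-rbac clusterrolebinding grants cluster-admin to. A hedged shell equivalent of that loop:

	until sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
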
	I0813 21:10:34.825676   12791 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:34.825790   12791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:34.827844   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:35.357758   12791 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813210910-30853" rescaled to 1
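
The kapi.go line records minikube trimming the coredns deployment from the kubeadm default of two replicas down to one, which is enough on a single-node cluster. Done by hand, the rescale would be:

	kubectl -n kube-system scale deployment coredns --replicas=1
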
	I0813 21:10:35.357830   12791 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:10:35.359667   12791 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:35.357884   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:35.357927   12791 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 21:10:35.358131   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:10:35.359798   12791 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359818   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:35.359820   12791 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.359828   12791 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:35.359855   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.359852   12791 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359908   12791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813210910-30853"
	I0813 21:10:35.360333   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360381   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.360414   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360455   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.374986   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42203
	I0813 21:10:35.375050   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0813 21:10:35.375635   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.375910   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.377813   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377836   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.377912   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377925   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.378238   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.378810   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.378869   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.379811   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.380004   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.391384   12791 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.391410   12791 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:35.391438   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.391832   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.391897   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.391999   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0813 21:10:35.392393   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.392989   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.393014   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.393496   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.393691   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.397628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.400074   12791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:35.400221   12791 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:35.400233   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:35.400253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.406732   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0813 21:10:35.407200   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.407553   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.407703   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.407724   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.408324   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.408333   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.408348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.408363   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.408489   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.408643   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.408815   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.409189   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.409266   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.424756   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0813 21:10:35.425178   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.425688   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.425717   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.426032   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.426208   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.429530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.429754   12791 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:35.429775   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:35.429797   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.436000   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.436664   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.436942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.437117   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.437291   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.594125   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:32.279420   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.780066   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.280227   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.779756   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.280100   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.779428   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.279470   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.779478   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.279401   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.779390   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.796621   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:36.020007   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:10:36.022097   12791 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:36.022141   12791 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:37.953285   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.359113303s)
	I0813 21:10:37.953357   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953374   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.953716   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.953737   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:37.953747   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953764   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.954032   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.954047   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018145   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.221484906s)
	I0813 21:10:38.018195   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018210   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018146   12791 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.995992413s)
	I0813 21:10:38.018276   12791 api_server.go:70] duration metric: took 2.660410949s to wait for apiserver process to appear ...
	I0813 21:10:38.018284   12791 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:38.018293   12791 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:10:38.018510   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018529   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018538   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018547   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018806   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018828   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018842   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018866   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.019228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:10:38.019231   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.019253   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.021307   12791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 21:10:38.021330   12791 addons.go:344] enableAddons completed in 2.663409626s
	I0813 21:10:38.037183   12791 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:10:38.040155   12791 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:10:38.040215   12791 api_server.go:129] duration metric: took 21.924445ms to wait for apiserver health ...
	I0813 21:10:38.040228   12791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:38.072532   12791 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:38.072583   12791 system_pods.go:61] "coredns-78fcd69978-42frp" [ffc12ff0-fe4e-422b-ae81-83f17416e379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072594   12791 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072605   12791 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running
	I0813 21:10:38.072623   12791 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:10:38.072630   12791 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running
	I0813 21:10:38.072639   12791 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running
	I0813 21:10:38.072646   12791 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:10:38.072656   12791 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:10:38.072667   12791 system_pods.go:74] duration metric: took 32.432184ms to wait for pod list to return data ...
	I0813 21:10:38.072681   12791 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:38.079488   12791 default_sa.go:45] found service account: "default"
	I0813 21:10:38.079509   12791 default_sa.go:55] duration metric: took 6.821814ms for default service account to be created ...
	I0813 21:10:38.079522   12791 kubeadm.go:547] duration metric: took 2.721660353s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:10:38.079544   12791 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:38.087838   12791 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.06779332s)
	I0813 21:10:38.087870   12791 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 21:10:38.089094   12791 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:38.089130   12791 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:38.089146   12791 node_conditions.go:105] duration metric: took 9.595836ms to run NodePressure ...
	I0813 21:10:38.089160   12791 start.go:231] waiting for startup goroutines ...
	I0813 21:10:38.151075   12791 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:10:38.152833   12791 out.go:177] 
	W0813 21:10:38.153012   12791 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:10:38.154648   12791 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:10:38.156287   12791 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813210910-30853" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:03:41 UTC, end at Fri 2021-08-13 21:10:41 UTC. --
	Aug 13 21:10:39 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:39.904804211Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,StartedAt:1628889019139243918,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a361-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3b577536-5550-42ee-a361-275f78e67c9e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3b577536-5550-42ee-a361-275f78e67c9e/containers/storage-provisioner/b1328521,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/3b577536-5550-42ee-a361-275f78e67c9e/volumes/kubernetes.io~projected/kube-api-access-6d9wg,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_3b577536-5550-42ee-a361-275f78e67c9e/storage-provisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=97ee2f27-e621-4b5d-984d-29e3eba76ff1 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.002582418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dc811a62-b91e-4ac4-8195-15547e234a26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.002647142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dc811a62-b91e-4ac4-8195-15547e234a26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.003020251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash: de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a361-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dc811a62-b91e-4ac4-8195-15547e234a26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.045763977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=898bad6d-8d9f-4adc-a410-09bf42219b3c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.045826571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=898bad6d-8d9f-4adc-a410-09bf42219b3c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.046175639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash: de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a361-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=898bad6d-8d9f-4adc-a410-09bf42219b3c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.090048350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0d5198a5-eee9-4e63-989f-94a9dc919fef name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.090148177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0d5198a5-eee9-4e63-989f-94a9dc919fef name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.090347531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash: de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a361-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0d5198a5-eee9-4e63-989f-94a9dc919fef name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.133708017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e39d02d9-dd8f-4052-bac1-7c0671eb1282 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.133771662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e39d02d9-dd8f-4052-bac1-7c0671eb1282 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.134143884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash: de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a361-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e39d02d9-dd8f-4052-bac1-7c0671eb1282 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.178293303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8efe1826-8b17-47f1-b649-0c51b4151ec8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.178375918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8efe1826-8b17-47f1-b649-0c51b4151ec8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:10:41 default-k8s-different-port-20210813210102-30853 crio[2043]: time="2021-08-13 21:10:41.178634907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f,PodSandboxId:749b97f59c0c3dd60015c9cec33eaf842b7619c22a1ebfe5c82453e3787b2db8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889028167294911,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-265ml,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a37a9d34-9307-42f7-b165-6aee4b9b2518,},Annotations:map[string]string{io.kubernetes.container.hash:
de9f1421,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094,PodSandboxId:5df5c7fd58ec2c15b2de0729d35ff78df393c4110e6cbbb0096a9799db1318ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889019071659371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b577536-5550-42ee-a3
61-275f78e67c9e,},Annotations:map[string]string{io.kubernetes.container.hash: c04b78af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b,PodSandboxId:2c997df02d85c7c3ea77dd5d65bfbce4239c3d764580a98abb4b77f938740703,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889018703257129,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-bjd2q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b
006335d-65ed-49c1-96b6-8d753f5fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfdace8,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835,PodSandboxId:5e0a9fa5c886d4ffea2554bf043312c53436542ed60b6a198b91c155199f002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628889014953516811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-jphw4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 057e9392-38dd-4c71-a09d-83ae9055347e,},Annotations:map[string]string{io.kubernetes.container.hash: 6857dfbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b,PodSandboxId:3f398ab3b89ee1711f5da7333f5b2b821dbc63f694a20adaacf744b6c1a58f20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a
0100b,State:CONTAINER_RUNNING,CreatedAt:1628889012788386408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jn56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf9beff3-8f15-4901-9886-ef5f0d821182,},Annotations:map[string]string{io.kubernetes.container.hash: 6afe36de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db,PodSandboxId:88359d0165f40f8ac136a1c9386e5da420f5a61c0db4413dcc684054e5db9d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,Cr
eatedAt:1628888989412635879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ad0e10cac8aeee380d26bbfbc000cf,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e,PodSandboxId:0c2ff6f02f0bfdb9eec25641e32bfb61d46e53717823512882d9daaa529ff156,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69
543b,State:CONTAINER_RUNNING,CreatedAt:1628888989026504196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e5e18bcabd6d0bbe78163ae4a98f94,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810,PodSandboxId:0af4bcb76df381b648f031c6a71634e8585a0356d4201432678bcbb6cd677c20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd8
71b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628888988932322073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ac62e159326a36fdc31b66bc9766a7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9d1cb4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da,PodSandboxId:d30efefc17fce6f4769b01d3bc43ba1b9d4e1f4ef1b87ca148374ec36b4ea79f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e8915
31b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628888988769788104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-different-port-20210813210102-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cca1ddb3b24aae5967dc09c1a83a0c1,},Annotations:map[string]string{io.kubernetes.container.hash: ed50a593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8efe1826-8b17-47f1-b649-0c51b4151ec8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	78f9412bae3df       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   13 seconds ago      Exited              dashboard-metrics-scraper   1                   749b97f59c0c3
	da5b0f37de36c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago      Running             storage-provisioner         0                   5df5c7fd58ec2
	b1f0605333fb5       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   22 seconds ago      Running             kubernetes-dashboard        0                   2c997df02d85c
	99f881d576aca       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   26 seconds ago      Running             coredns                     0                   5e0a9fa5c886d
	3848b04f93b16       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   28 seconds ago      Running             kube-proxy                  0                   3f398ab3b89ee
	cf57c2ca5ce6e       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   51 seconds ago      Running             kube-scheduler              0                   88359d0165f40
	426faaf2ad7c3       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   52 seconds ago      Running             kube-controller-manager     0                   0c2ff6f02f0bf
	9f61c3a7d63f2       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   52 seconds ago      Running             kube-apiserver              0                   0af4bcb76df38
	e990afb78f8b2       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   52 seconds ago      Running             etcd                        0                   d30efefc17fce
	
	* 
	* ==> coredns [99f881d576acae2d79f49496fe87995651c01c1d11035632f74fe263f7394835] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210813210102-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210813210102-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=default-k8s-different-port-20210813210102-30853
	                    minikube.k8s.io/updated_at=2021_08_13T21_09_58_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:09:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210813210102-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 21:10:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:09:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:09:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:09:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:10:34 +0000   Fri, 13 Aug 2021 21:10:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.136
	  Hostname:    default-k8s-different-port-20210813210102-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a49921801fb044088819eb98af731e4b
	  System UUID:                a4992180-1fb0-4408-8819-eb98af731e4b
	  Boot ID:                    b0749f82-7a44-496b-9b13-eea1ee12d9e8
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-jphw4                                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     30s
	  kube-system                 etcd-default-k8s-different-port-20210813210102-30853                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         44s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210813210102-30853              250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210813210102-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-jn56d                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210813210102-30853              100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 metrics-server-7c784ccb57-cdhkk                                             100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         26s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-265ml                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-bjd2q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  54s (x6 over 54s)  kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x6 over 54s)  kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x6 over 54s)  kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s                kubelet     Node default-k8s-different-port-20210813210102-30853 status is now: NodeReady
	  Normal  Starting                 28s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	*               on the kernel command line
	[  +0.000122] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.444808] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.040700] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.903859] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1725 comm=systemd-network
	[  +0.753003] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.310008] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011903] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:04] systemd-fstab-generator[2136]: Ignoring "noauto" for root device
	[  +0.141171] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +0.221343] systemd-fstab-generator[2175]: Ignoring "noauto" for root device
	[  +8.225926] systemd-fstab-generator[2364]: Ignoring "noauto" for root device
	[ +18.064269] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.334152] kauditd_printk_skb: 89 callbacks suppressed
	[Aug13 21:05] kauditd_printk_skb: 2 callbacks suppressed
	[ +35.653969] NFSD: Unable to end grace period: -110
	[Aug13 21:09] kauditd_printk_skb: 14 callbacks suppressed
	[ +26.084804] systemd-fstab-generator[5969]: Ignoring "noauto" for root device
	[ +16.997735] systemd-fstab-generator[6368]: Ignoring "noauto" for root device
	[Aug13 21:10] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.062256] kauditd_printk_skb: 107 callbacks suppressed
	[  +9.334751] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.720038] systemd-fstab-generator[7951]: Ignoring "noauto" for root device
	[  +0.895521] systemd-fstab-generator[8005]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e990afb78f8b270d6c731ebce965a39955c423acd8f76414397024d75ae5b9da] <==
	* raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 is starting a new election at term 1
	raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 became candidate at term 2
	raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 received MsgVoteResp from 247e73b5d65300e1 at term 2
	raft2021/08/13 21:09:49 INFO: 247e73b5d65300e1 became leader at term 2
	raft2021/08/13 21:09:49 INFO: raft.node: 247e73b5d65300e1 elected leader 247e73b5d65300e1 at term 2
	2021-08-13 21:09:49.392234 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 21:09:49.395790 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 21:09:49.397137 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 21:09:49.397279 I | etcdserver: published {Name:default-k8s-different-port-20210813210102-30853 ClientURLs:[https://192.168.50.136:2379]} to cluster 736953c025287a25
	2021-08-13 21:09:49.397288 I | embed: ready to serve client requests
	2021-08-13 21:09:49.399410 I | embed: serving client requests on 192.168.50.136:2379
	2021-08-13 21:09:49.399554 I | embed: ready to serve client requests
	2021-08-13 21:09:49.429449 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:09:57.419650 W | etcdserver: read-only range request "key:\"/registry/events/default/default-k8s-different-port-20210813210102-30853.169af9e755d5fde7\" " with result "range_response_count:0 size:5" took too long (851.99415ms) to execute
	2021-08-13 21:09:57.421983 W | etcdserver: request "header:<ID:63467390364394589 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-ftchd\" mod_revision:0 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-ftchd\" value_size:904 >> failure:<>>" with result "size:16" took too long (784.275831ms) to execute
	2021-08-13 21:09:57.426171 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210813210102-30853\" " with result "range_response_count:1 size:5198" took too long (583.849177ms) to execute
	2021-08-13 21:09:57.427427 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (510.678643ms) to execute
	2021-08-13 21:09:57.432930 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:0 size:5" took too long (677.386887ms) to execute
	2021-08-13 21:10:08.528598 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:10:11.529393 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:260" took too long (331.774961ms) to execute
	2021-08-13 21:10:11.530079 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (187.767839ms) to execute
	2021-08-13 21:10:12.798229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:10:22.800622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:10:25.671627 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6431" took too long (121.296265ms) to execute
	2021-08-13 21:10:32.799654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  21:10:41 up 7 min,  0 users,  load average: 2.90, 1.16, 0.51
	Linux default-k8s-different-port-20210813210102-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [9f61c3a7d63f233097bba743418bc8f35a885b3bb4a6a7178b5a3456960cc810] <==
	* Trace[1492505029]: [514.51431ms] [514.51431ms] END
	I0813 21:09:57.430453       1 trace.go:205] Trace[365018303]: "List" url:/api/v1/namespaces/kube-system/limitranges,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.915) (total time: 514ms):
	Trace[365018303]: ---"Listing from storage done" 514ms (21:09:00.430)
	Trace[365018303]: [514.615722ms] [514.615722ms] END
	I0813 21:09:57.436068       1 trace.go:205] Trace[1750305912]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.50.136,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.754) (total time: 681ms):
	Trace[1750305912]: [681.298646ms] [681.298646ms] END
	I0813 21:09:57.439016       1 trace.go:205] Trace[933891466]: "Get" url:/api/v1/nodes/default-k8s-different-port-20210813210102-30853,user-agent:kubeadm/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.136,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.841) (total time: 597ms):
	Trace[933891466]: ---"About to write a response" 589ms (21:09:00.430)
	Trace[933891466]: [597.313287ms] [597.313287ms] END
	I0813 21:09:57.440255       1 trace.go:205] Trace[2142592379]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.136,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:09:56.911) (total time: 528ms):
	Trace[2142592379]: ---"Object stored in database" 527ms (21:09:00.440)
	Trace[2142592379]: [528.920179ms] [528.920179ms] END
	I0813 21:09:57.474192       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:09:58.492767       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:09:58.561940       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:10:04.076723       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:10:11.021273       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 21:10:11.155456       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 21:10:18.191750       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 21:10:18.192089       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:10:18.192169       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:10:25.677823       1 client.go:360] parsed scheme: "passthrough"
	I0813 21:10:25.678194       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 21:10:25.678352       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [426faaf2ad7c312cc7fd31b786adf7db81a5b05aae0aa19b8c952ae5dcbc235e] <==
	* I0813 21:10:15.738740       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-cdhkk"
	I0813 21:10:16.359648       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:10:16.423354       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.464393       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.465024       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:10:16.499070       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.501091       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.521054       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.538815       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.539740       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.539765       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.573782       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.574443       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.574480       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.574495       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.617137       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.617478       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.617790       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.619183       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:16.663622       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:16.664310       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:16.664639       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.664771       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:16.692948       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-bjd2q"
	I0813 21:10:16.751656       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-265ml"
	
	* 
	* ==> kube-proxy [3848b04f93b168973cdbafccfc1c672420e446e5f7d64db41b371881f6822a0b] <==
	* I0813 21:10:13.350525       1 node.go:172] Successfully retrieved node IP: 192.168.50.136
	I0813 21:10:13.350699       1 server_others.go:140] Detected node IP 192.168.50.136
	W0813 21:10:13.350759       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:10:13.513329       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:10:13.513436       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:10:13.513489       1 server_others.go:212] Using iptables Proxier.
	I0813 21:10:13.514298       1 server.go:643] Version: v1.21.3
	I0813 21:10:13.516087       1 config.go:315] Starting service config controller
	I0813 21:10:13.516113       1 config.go:224] Starting endpoint slice config controller
	I0813 21:10:13.516114       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:10:13.516122       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 21:10:13.540156       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 21:10:13.545491       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:10:13.616711       1 shared_informer.go:247] Caches are synced for service config 
	I0813 21:10:13.630270       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [cf57c2ca5ce6e4a407a21e8bdebe35284d38f2f854a3bdf51602c2b3c59809db] <==
	* E0813 21:09:54.265641       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:09:54.279826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:09:54.280598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.280806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:09:54.281346       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:09:54.281544       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:09:54.281806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.283528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:09:54.284973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:09:54.285135       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:09:54.289619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:09:54.290099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.290291       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:54.290420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:09:55.217784       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:09:55.230962       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:55.325540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:55.326261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:09:55.361384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:09:55.370306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:09:55.536650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:09:55.538248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:09:55.596821       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:09:55.652988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 21:09:57.262912       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:03:41 UTC, end at Fri 2021-08-13 21:10:41 UTC. --
	Aug 13 21:10:16 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:16.936598    6377 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a37a9d34-9307-42f7-b165-6aee4b9b2518-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-265ml\" (UID: \"a37a9d34-9307-42f7-b165-6aee4b9b2518\") "
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.872673    6377 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.872715    6377 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.872950    6377 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m4lnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-cdhkk_kube-system(899ed30f-faf1-40e3-9a46-c1ad31aa7f70): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:10:17 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:17.873001    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-cdhkk" podUID=899ed30f-faf1-40e3-9a46-c1ad31aa7f70
	Aug 13 21:10:18 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:18.574192    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-cdhkk" podUID=899ed30f-faf1-40e3-9a46-c1ad31aa7f70
	Aug 13 21:10:25 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:25.252622    6377 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/899ed30f-faf1-40e3-9a46-c1ad31aa7f70/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-cdhkk"
	Aug 13 21:10:25 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:25.315114    6377 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/a37a9d34-9307-42f7-b165-6aee4b9b2518/etc-hosts with error exit status 1" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml"
	Aug 13 21:10:27 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:27.911933    6377 scope.go:111] "RemoveContainer" containerID="e0b89b63f5fe921ef6c5e04cea4e87cb0117264752ec780c66173b524339e17f"
	Aug 13 21:10:28 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:28.922753    6377 scope.go:111] "RemoveContainer" containerID="e0b89b63f5fe921ef6c5e04cea4e87cb0117264752ec780c66173b524339e17f"
	Aug 13 21:10:28 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:28.923048    6377 scope.go:111] "RemoveContainer" containerID="78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	Aug 13 21:10:28 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:28.923306    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-265ml_kubernetes-dashboard(a37a9d34-9307-42f7-b165-6aee4b9b2518)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml" podUID=a37a9d34-9307-42f7-b165-6aee4b9b2518
	Aug 13 21:10:29 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:29.934482    6377 scope.go:111] "RemoveContainer" containerID="78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	Aug 13 21:10:29 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:29.934814    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-265ml_kubernetes-dashboard(a37a9d34-9307-42f7-b165-6aee4b9b2518)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml" podUID=a37a9d34-9307-42f7-b165-6aee4b9b2518
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.282772    6377 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.282808    6377 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.283028    6377 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m4lnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-cdhkk_kube-system(899ed30f-faf1-40e3-9a46-c1ad31aa7f70): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:10:33 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:33.283069    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-cdhkk" podUID=899ed30f-faf1-40e3-9a46-c1ad31aa7f70
	Aug 13 21:10:35 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:35.603318    6377 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/899ed30f-faf1-40e3-9a46-c1ad31aa7f70/etc-hosts with error exit status 1" pod="kube-system/metrics-server-7c784ccb57-cdhkk"
	Aug 13 21:10:36 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:36.773411    6377 scope.go:111] "RemoveContainer" containerID="78f9412bae3dff20c2072ad4eba58ac86ea8e535fb5530421566074f2ea4439f"
	Aug 13 21:10:36 default-k8s-different-port-20210813210102-30853 kubelet[6377]: E0813 21:10:36.776592    6377 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-265ml_kubernetes-dashboard(a37a9d34-9307-42f7-b165-6aee4b9b2518)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-265ml" podUID=a37a9d34-9307-42f7-b165-6aee4b9b2518
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 kubelet[6377]: I0813 21:10:37.004775    6377 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:10:37 default-k8s-different-port-20210813210102-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [b1f0605333fb5fb5242f521f51b351a9344694288d138041ad7dabe9a1ae962b] <==
	* 2021/08/13 21:10:19 Using namespace: kubernetes-dashboard
	2021/08/13 21:10:19 Using in-cluster config to connect to apiserver
	2021/08/13 21:10:19 Using secret token for csrf signing
	2021/08/13 21:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:10:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:10:19 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:10:19 Generating JWE encryption key
	2021/08/13 21:10:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:10:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:10:19 Initializing JWE encryption key from synchronized object
	2021/08/13 21:10:19 Creating in-cluster Sidecar client
	2021/08/13 21:10:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:10:19 Serving insecurely on HTTP port: 9090
	2021/08/13 21:10:19 Starting overwatch
	
	* 
	* ==> storage-provisioner [da5b0f37de36c9bf9f701bdc435c520c01e049f49a45c1ddad8558d8496f7094] <==
	* I0813 21:10:19.288764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:10:19.360044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:10:19.364685       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:10:19.392709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:10:19.393488       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210102-30853_5ce06dd5-f19b-4b23-af54-735315d3c3bf!
	I0813 21:10:19.405519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2da62b6a-0ff9-4a92-b2ac-90266f4c9f83", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813210102-30853_5ce06dd5-f19b-4b23-af54-735315d3c3bf became leader
	I0813 21:10:19.496537       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210102-30853_5ce06dd5-f19b-4b23-af54-735315d3c3bf!
	

                                                
                                                
-- /stdout --
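
Note: two things in the log dump above are expected rather than incidental. First, the metrics-server image pull fails by design: the addon was enabled with its registry pointed at fake.domain (see the Audit table further down), so the ErrImagePull/ImagePullBackOff loop for metrics-server-7c784ccb57-cdhkk is the state this test deliberately sets up. Second, the kubelet log ends with systemd stopping kubelet, which is the pause operation under test itself; the stderr trace in the no-preload failure below shows pause running `sudo systemctl disable --now kubelet`. A minimal sketch for inspecting that state by hand, assuming the profile from this report is still up (the ssh subcommand and -p flag are as used elsewhere in this report):

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813210102-30853 "sudo systemctl status kubelet"
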
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853: exit status 2 (288.759003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-cdhkk
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 describe pod metrics-server-7c784ccb57-cdhkk
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210813210102-30853 describe pod metrics-server-7c784ccb57-cdhkk: exit status 1 (78.931076ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-cdhkk" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210813210102-30853 describe pod metrics-server-7c784ccb57-cdhkk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (6.01s)
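
Note: the NotFound in the post-mortem above is most likely a namespacing artifact rather than the pod vanishing between the list and the describe: the helper describes metrics-server-7c784ccb57-cdhkk without a -n flag, so kubectl looks in the default namespace, while the kubelet log places that pod in kube-system. A hedged sketch of the query that would probably have located it:

	kubectl --context default-k8s-different-port-20210813210102-30853 -n kube-system describe pod metrics-server-7c784ccb57-cdhkk
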

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210813205915-30853 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210813205915-30853 --alsologtostderr -v=1: exit status 80 (2.565720121s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210813205915-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:11:14.714245   13923 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:11:14.714354   13923 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:14.714370   13923 out.go:311] Setting ErrFile to fd 2...
	I0813 21:11:14.714374   13923 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:14.714465   13923 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:11:14.714623   13923 out.go:305] Setting JSON to false
	I0813 21:11:14.714639   13923 mustload.go:65] Loading cluster: no-preload-20210813205915-30853
	I0813 21:11:14.714923   13923 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:14.715309   13923 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:14.715351   13923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:14.726127   13923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0813 21:11:14.726564   13923 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:14.727128   13923 main.go:130] libmachine: Using API Version  1
	I0813 21:11:14.727150   13923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:14.727527   13923 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:14.727687   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:11:14.730821   13923 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:11:14.731174   13923 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:14.731211   13923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:14.741396   13923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0813 21:11:14.741767   13923 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:14.742214   13923 main.go:130] libmachine: Using API Version  1
	I0813 21:11:14.742234   13923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:14.742525   13923 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:14.742684   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:11:14.743330   13923 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210813205915-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:11:14.745948   13923 out.go:177] * Pausing node no-preload-20210813205915-30853 ... 
	I0813 21:11:14.745969   13923 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:11:14.746267   13923 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:14.746303   13923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:14.756874   13923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0813 21:11:14.757260   13923 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:14.757673   13923 main.go:130] libmachine: Using API Version  1
	I0813 21:11:14.757692   13923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:14.758038   13923 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:14.758215   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:11:14.758407   13923 ssh_runner.go:149] Run: systemctl --version
	I0813 21:11:14.758433   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:11:14.763545   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:11:14.763852   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:11:14.763882   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:11:14.763938   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:11:14.764127   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:11:14.764274   13923 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:11:14.764402   13923 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:11:14.861866   13923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:14.872057   13923 pause.go:50] kubelet running: true
	I0813 21:11:14.872106   13923 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:11:15.154472   13923 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:11:15.154598   13923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:11:15.296256   13923 cri.go:76] found id: "efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf"
	I0813 21:11:15.296282   13923 cri.go:76] found id: "25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c"
	I0813 21:11:15.296287   13923 cri.go:76] found id: "7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9"
	I0813 21:11:15.296293   13923 cri.go:76] found id: "ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3"
	I0813 21:11:15.296298   13923 cri.go:76] found id: "9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357"
	I0813 21:11:15.296304   13923 cri.go:76] found id: "34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0"
	I0813 21:11:15.296309   13923 cri.go:76] found id: "94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b"
	I0813 21:11:15.296314   13923 cri.go:76] found id: "fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455"
	I0813 21:11:15.296324   13923 cri.go:76] found id: "b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6"
	I0813 21:11:15.296336   13923 cri.go:76] found id: "2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	I0813 21:11:15.296343   13923 cri.go:76] found id: ""
	I0813 21:11:15.296387   13923 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210813205915-30853 --alsologtostderr -v=1 failed: exit status 80
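
Note: the stderr trace above is cut off at `sudo runc list -f json`, so the step that actually returned exit status 80 is not captured here. What the trace does show is the pause flow: load the profile config, launch the kvm2 driver plugin, SSH to the node, disable kubelet, then enumerate running CRI containers by pod-namespace label before pausing them. A minimal sketch for rerunning the enumeration step by hand, using the same crictl invocation the trace shows, wrapped in minikube ssh (profile name taken from this report):

	out/minikube-linux-amd64 ssh -p no-preload-20210813205915-30853 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
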
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853: exit status 2 (260.413066ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
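
Note: exit status 2 with Host reporting Running is consistent with a half-applied pause: the VM is up, but the pause attempt had already disabled kubelet, so at least one component is out of its expected state and status exits non-zero, which is why the harness marks it "may be ok". A sketch for checking the component that pause touches, assuming the status template also exposes a Kubelet field alongside the Host and APIServer fields used elsewhere in this report:

	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
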
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813205915-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p no-preload-20210813205915-30853 logs -n 25: (1.194830849s)
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:26 UTC | Fri, 13 Aug 2021 21:03:27 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:07 UTC | Fri, 13 Aug 2021 21:09:09 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:09 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:11 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:10:25 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:36 UTC | Fri, 13 Aug 2021 21:10:36 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:10:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:40 UTC | Fri, 13 Aug 2021 21:10:41 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:42 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:43 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:10:58 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:14 UTC | Fri, 13 Aug 2021 21:11:14 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
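
The audit table above records each minikube invocation with its full flag set. As a rough, hypothetical sketch (the binary path and flags are copied from the newest-cni start row of the table; everything else is illustrative, not part of the test run), the same start could be driven from Go with os/exec:

	package main

	import (
		"os"
		"os/exec"
	)

	// Sketch: replay the "start -p newest-cni-..." row from the audit table.
	// Binary path and flags are copied from the table; nothing else is assumed.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "newest-cni-20210813210910-30853",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubelet.network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=192.168.111.111/16",
			"--driver=kvm2", "--container-runtime=crio",
			"--kubernetes-version=v1.22.0-rc.0")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}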
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:09:10
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
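
Every subsequent log line follows the klog-style format documented in the header above. A minimal sketch for splitting such a line into its fields; the regexp and field names are illustrative, not minikube code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Sketch: split one klog-style line ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg)
	// into fields. Group positions follow the format string in the log header.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		line := "I0813 21:09:10.673379   12791 out.go:298] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}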
	I0813 21:09:10.673379   12791 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:09:10.673452   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673457   12791 out.go:311] Setting ErrFile to fd 2...
	I0813 21:09:10.673460   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673589   12791 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:09:10.673842   12791 out.go:305] Setting JSON to false
	I0813 21:09:10.710967   12791 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":10313,"bootTime":1628878638,"procs":196,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:09:10.711108   12791 start.go:121] virtualization: kvm guest
	I0813 21:09:10.714392   12791 out.go:177] * [newest-cni-20210813210910-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:09:10.716013   12791 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:09:10.714549   12791 notify.go:169] Checking for updates...
	I0813 21:09:10.717634   12791 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:09:10.719077   12791 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:10.720797   12791 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:09:10.721401   12791 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:09:10.721555   12791 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:10.721780   12791 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 21:09:10.721849   12791 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:09:10.756752   12791 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:09:10.756780   12791 start.go:278] selected driver: kvm2
	I0813 21:09:10.756787   12791 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:09:10.756803   12791 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:09:10.758053   12791 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.758234   12791 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:09:10.769742   12791 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:09:10.769793   12791 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 21:09:10.769818   12791 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 21:09:10.769965   12791 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:09:10.769992   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:10.769999   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:10.770006   12791 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:09:10.770016   12791 start_flags.go:277] config:
	{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:10.770113   12791 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.772194   12791 out.go:177] * Starting control plane node newest-cni-20210813210910-30853 in cluster newest-cni-20210813210910-30853
	I0813 21:09:10.772225   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:10.772278   12791 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 21:09:10.772313   12791 cache.go:56] Caching tarball of preloaded images
	I0813 21:09:10.772443   12791 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 21:09:10.772466   12791 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 21:09:10.772616   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:10.772647   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json: {Name:mka76415e48e0242b5a1559d0d7199fac2bfb5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
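
The two lines above show the profile config being persisted to profiles/<name>/config.json under a write lock. A simplified sketch of that save step, assuming a tiny stand-in ClusterConfig struct rather than minikube's full config, and omitting the locking:

	package main

	import (
		"encoding/json"
		"os"
		"path/filepath"
	)

	// ClusterConfig is a tiny illustrative stand-in for minikube's profile config;
	// the values used below come from the flags and versions visible in this log.
	type ClusterConfig struct {
		Name              string
		Memory            int
		Driver            string
		ContainerRuntime  string
		KubernetesVersion string
	}

	// saveProfile mirrors the "Saving config to .../profiles/<name>/config.json" step:
	// marshal the config and write it under the profile directory.
	func saveProfile(miniHome string, cfg ClusterConfig) error {
		dir := filepath.Join(miniHome, "profiles", cfg.Name)
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
	}

	func main() {
		_ = saveProfile(os.TempDir(), ClusterConfig{
			Name:              "newest-cni-20210813210910-30853",
			Memory:            2200,
			Driver:            "kvm2",
			ContainerRuntime:  "crio",
			KubernetesVersion: "v1.22.0-rc.0",
		})
	}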
	I0813 21:09:10.772840   12791 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:09:10.772878   12791 start.go:313] acquiring machines lock for newest-cni-20210813210910-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:09:10.772950   12791 start.go:317] acquired machines lock for "newest-cni-20210813210910-30853" in 46.661µs
	I0813 21:09:10.772977   12791 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:09:10.773061   12791 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:09:07.914518   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.406958   11447 pod_ready.go:81] duration metric: took 4m0.40016385s waiting for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:08.406984   11447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:08.407011   11447 pod_ready.go:38] duration metric: took 4m38.843620331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:08.407047   11447 kubeadm.go:604] restartCluster took 5m2.813329014s
	W0813 21:09:08.407209   11447 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:08.407246   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:07.902231   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.401905   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.775162   12791 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 21:09:10.775296   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:10.775358   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:10.786479   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0813 21:09:10.786930   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:10.787562   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:09:10.787587   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:10.788015   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:10.788228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:10.788398   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:10.788591   12791 start.go:160] libmachine.API.Create for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:10.788640   12791 client.go:168] LocalClient.Create starting
	I0813 21:09:10.788684   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:09:10.788746   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788770   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.788912   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:09:10.788937   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788956   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.789012   12791 main.go:130] libmachine: Running pre-create checks...
	I0813 21:09:10.789029   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .PreCreateCheck
	I0813 21:09:10.789351   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:10.789790   12791 main.go:130] libmachine: Creating machine...
	I0813 21:09:10.789804   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Create
	I0813 21:09:10.789932   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating KVM machine...
	I0813 21:09:10.792752   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found existing default KVM network
	I0813 21:09:10.794412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794251   12815 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc000010800] misses:0}
	I0813 21:09:10.794453   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794342   12815 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 21:09:10.817502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | trying to create private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24...
	I0813 21:09:11.103452   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24 created
	I0813 21:09:11.103485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.103368   12815 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.103509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.103562   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:09:11.103608   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:09:11.320966   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.320858   12815 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa...
	I0813 21:09:11.459093   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.458976   12815 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/newest-cni-20210813210910-30853.rawdisk...
	I0813 21:09:11.459148   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing magic tar header
	I0813 21:09:11.459177   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing SSH key tar header
	I0813 21:09:11.459194   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.459075   12815 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.459223   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 (perms=drwx------)
	I0813 21:09:11.459288   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853
	I0813 21:09:11.459321   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:09:11.459350   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:09:11.459373   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:09:11.459391   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:09:11.459409   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.459426   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:09:11.459444   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:09:11.459464   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:09:11.459485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:09:11.459500   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home
	I0813 21:09:11.459515   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:09:11.459528   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Skipping /home - not owner
	I0813 21:09:11.459546   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.488427   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:ee:fb:7e in network default
	I0813 21:09:11.489099   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring networks are active...
	I0813 21:09:11.489140   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.491476   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network default is active
	I0813 21:09:11.491829   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network mk-newest-cni-20210813210910-30853 is active
	I0813 21:09:11.492457   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Getting domain xml...
	I0813 21:09:11.494775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.955786   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting to get IP...
	I0813 21:09:11.956670   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957315   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.957262   12815 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:09:12.221730   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222307   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222349   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.222212   12815 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:09:12.604662   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605164   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605191   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.605108   12815 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:09:13.029701   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030156   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030218   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.030122   12815 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:09:13.504659   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505143   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.505105   12815 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:09:14.093824   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094446   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.094345   12815 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:09:14.929917   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.930469   12815 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:09:12.902877   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.903637   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.678952   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679492   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679571   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:15.679462   12815 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:09:16.668007   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668572   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668609   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:16.668495   12815 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:09:17.859819   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860363   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860390   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:17.860285   12815 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:09:19.539855   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:19.540442   12815 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:09:17.403580   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.901370   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.902145   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.887601   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:21.888074   12815 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:09:25.255905   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256490   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Found IP for machine: 192.168.39.210
	I0813 21:09:25.256524   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has current primary IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserving static IP address...
	I0813 21:09:25.256915   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"} in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.303282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserved static IP address: 192.168.39.210
	I0813 21:09:25.303341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Getting to WaitForSSH function...
	I0813 21:09:25.303352   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting for SSH to be available...
	I0813 21:09:25.309055   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309442   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.309474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309627   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH client type: external
	I0813 21:09:25.309651   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa (-rw-------)
	I0813 21:09:25.309698   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:09:25.309731   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | About to run SSH command:
	I0813 21:09:25.309744   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | exit 0
	I0813 21:09:25.467104   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:09:25.467603   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) KVM machine creation complete!
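
The retry.go lines above poll libvirt for the new domain's DHCP lease, sleeping a little longer after each miss (roughly 0.26s growing to 3.4s) until the IP appears. A minimal sketch of that wait-with-growing-delay pattern; waitForIP and lookupIP are hypothetical stand-ins, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookupIP with a growing delay until it succeeds or the
	// deadline passes, echoing the "will retry after ..." lines in the log.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		delay := 250 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the delay, as the logged 0.26s -> 3.4s progression suggests
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.210", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}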
	I0813 21:09:25.467679   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:25.468310   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468513   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468691   12791 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:09:25.468710   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:09:25.471536   12791 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:09:25.471555   12791 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:09:25.471565   12791 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:09:25.471575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.476123   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476450   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.476479   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476604   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.476755   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.476933   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.477105   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.477284   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.477466   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.477480   12791 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:09:25.594161   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.594190   12791 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:09:25.594203   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.600130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600531   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.600564   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600765   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.600974   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601303   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.601456   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.601620   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.601635   12791 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:09:22.392237   11600 pod_ready.go:81] duration metric: took 4m0.007094721s waiting for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:22.392261   11600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:22.392283   11600 pod_ready.go:38] duration metric: took 4m14.135839126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:22.392312   11600 kubeadm.go:604] restartCluster took 4m52.280117973s
	W0813 21:09:22.392448   11600 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:22.392485   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:25.715874   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:09:25.715991   12791 main.go:130] libmachine: found compatible host: buildroot
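
Provisioner detection above amounts to running cat /etc/os-release over SSH and matching the ID field. A small sketch of parsing that key=value output, using the exact output captured above; parseOSRelease is an illustrative helper, not minikube code:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release key=value lines into a map,
	// stripping optional quotes, so ID can be matched against "buildroot".
	func parseOSRelease(out string) map[string]string {
		kv := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			parts := strings.SplitN(line, "=", 2)
			kv[parts[0]] = strings.Trim(parts[1], "\"")
		}
		return kv
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nVERSION_ID=2020.02.12\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
		osr := parseOSRelease(out)
		if osr["ID"] == "buildroot" {
			fmt.Println("found compatible host:", osr["NAME"])
		}
	}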
	I0813 21:09:25.716007   12791 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:09:25.716023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716285   12791 buildroot.go:166] provisioning hostname "newest-cni-20210813210910-30853"
	I0813 21:09:25.716311   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716475   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.722141   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.722575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722814   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.723002   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723169   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723323   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.723458   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.723611   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.723626   12791 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname
	I0813 21:09:25.855120   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813210910-30853
	
	I0813 21:09:25.855151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.861182   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861544   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.861567   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861715   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.861922   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862214   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.862344   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.862548   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.862577   12791 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813210910-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813210910-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813210910-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
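
The two provisioning commands above (set the hostname, then patch /etc/hosts) are plain shell executed over SSH against the freshly created VM. Below is a minimal, self-contained sketch of that run-a-command-over-SSH pattern in Go with golang.org/x/crypto/ssh; the key path is hypothetical, the address and user come from the log, and this stands in for (rather than reproduces) minikube's own ssh_runner.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube stores per-machine keys under .minikube/machines/.
	key, err := os.ReadFile("/path/to/machines/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.210:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same shape as the logged command: set the hostname, then persist it.
	out, err := sess.CombinedOutput(`sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}
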
	I0813 21:09:25.982023   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.982082   12791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:09:25.982118   12791 buildroot.go:174] setting up certificates
	I0813 21:09:25.982134   12791 provision.go:83] configureAuth start
	I0813 21:09:25.982150   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.982399   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:25.988009   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.988380   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.993579   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.993994   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.994024   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.994249   12791 provision.go:138] copyHostCerts
	I0813 21:09:25.994336   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:09:25.994347   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:09:25.994396   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:09:25.994483   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:09:25.994497   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:09:25.994532   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:09:25.994643   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:09:25.994656   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:09:25.994688   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:09:25.994760   12791 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813210910-30853 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube newest-cni-20210813210910-30853]
	I0813 21:09:26.305745   12791 provision.go:172] copyRemoteCerts
	I0813 21:09:26.305810   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:09:26.305840   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.311502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.311880   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.311916   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.312018   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.312266   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.312474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.312635   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:26.397917   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:09:26.415261   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:09:26.432018   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:09:26.448392   12791 provision.go:86] duration metric: configureAuth took 466.244488ms
	I0813 21:09:26.448413   12791 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:09:26.448550   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:26.448647   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.453886   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.454267   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454404   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.454578   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454719   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454882   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.455020   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:26.455171   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:26.455193   12791 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:09:27.218253   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:09:27.218291   12791 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:09:27.218304   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetURL
	I0813 21:09:27.220942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using libvirt version 3000000
	I0813 21:09:27.225565   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.225908   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.225955   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.226230   12791 main.go:130] libmachine: Docker is up and running!
	I0813 21:09:27.226255   12791 main.go:130] libmachine: Reticulating splines...
	I0813 21:09:27.226262   12791 client.go:171] LocalClient.Create took 16.437611332s
	I0813 21:09:27.226308   12791 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813210910-30853" took 16.437720973s
	I0813 21:09:27.226319   12791 start.go:267] post-start starting for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:27.226323   12791 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:09:27.226339   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.226579   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:09:27.226605   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.231167   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231514   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.231541   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231723   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.231888   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.232115   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.232258   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.318810   12791 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:09:27.324679   12791 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:09:27.324708   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:09:27.324766   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:09:27.324867   12791 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:09:27.324993   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:09:27.332665   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:27.349495   12791 start.go:270] post-start completed in 123.164223ms
	I0813 21:09:27.349583   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:27.350235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.356173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.356569   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356804   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:27.357034   12791 start.go:129] duration metric: createHost completed in 16.583958717s
	I0813 21:09:27.357054   12791 start.go:80] releasing machines lock for "newest-cni-20210813210910-30853", held for 16.584089955s
	I0813 21:09:27.357097   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.357282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.361779   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.362122   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362275   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362445   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362924   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.363133   12791 ssh_runner.go:149] Run: systemctl --version
	I0813 21:09:27.363160   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.363219   12791 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:09:27.363264   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.368253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368519   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.368556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.368784   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.368919   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.369055   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.369149   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369521   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.369556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369717   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.369863   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.369979   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.370099   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.452425   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:27.452543   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:31.448706   12791 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.996135455s)
	I0813 21:09:31.448838   12791 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:09:31.448901   12791 ssh_runner.go:149] Run: which lz4
	I0813 21:09:31.453326   12791 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:09:31.458022   12791 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:09:31.458058   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 21:09:34.040840   12791 crio.go:362] Took 2.587545 seconds to copy over tarball
	I0813 21:09:34.040960   12791 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
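
The preload step above is copy-then-extract: the lz4-compressed tarball of container images is scp'd to /preloaded.tar.lz4, then unpacked into /var. Below is a hedged local sketch of the extract step that shells out the same way the logged command does; it assumes tar, lz4, and passwordless sudo are available on the machine running it.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Mirror the logged existence check before extracting.
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("existence check for %s failed: %v", tarball, err)
	}

	start := time.Now()
	// Same command shape as the log: tar with an lz4 decompress filter.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}
	fmt.Printf("Took %.6f seconds to extract the tarball\n", time.Since(start).Seconds())
}
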
	I0813 21:09:39.662568   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.255292287s)
	I0813 21:09:39.662654   11447 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:09:39.679831   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:09:39.679928   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:39.725756   11447 cri.go:76] found id: ""
	I0813 21:09:39.725838   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:39.734367   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:39.743419   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:39.743465   11447 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:39.046178   12791 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.005181631s)
	I0813 21:09:39.046212   12791 crio.go:369] Took 5.005343 seconds to extract the tarball
	I0813 21:09:39.046225   12791 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:09:39.096327   12791 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:09:39.108664   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:09:39.120896   12791 docker.go:153] disabling docker service ...
	I0813 21:09:39.120956   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:09:39.132781   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:09:39.144772   12791 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:09:39.291366   12791 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:09:39.473805   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:09:39.488990   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:09:39.508851   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:09:39.519787   12791 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:09:39.527766   12791 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:09:39.527827   12791 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:09:39.549292   12791 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
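
The netfilter sequence above is probe-then-fallback: read the bridge-netfilter sysctl, and if it is missing, load br_netfilter; IPv4 forwarding is then enabled either way. A minimal sketch of the same logic, run locally via os/exec rather than over SSH:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and returns its error, mirroring a bare "Run:" log line.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	// Probe: does the bridge-netfilter sysctl exist yet?
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Not fatal; the module may simply not be loaded, so load it.
		log.Printf("couldn't verify netfilter (might be okay): %v", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Enable IPv4 forwarding, as in the logged command.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}
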
	I0813 21:09:39.557653   12791 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:09:39.695889   12791 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:09:39.852538   12791 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:09:39.852673   12791 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:09:39.865143   12791 start.go:413] Will wait 60s for crictl version
	I0813 21:09:39.865219   12791 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:09:39.902891   12791 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:09:39.902976   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:40.146285   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:44.881949   11447 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:44.881970   12791 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:09:44.882025   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:44.888023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888330   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:44.888361   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888544   12791 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:09:44.893252   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:09:44.903812   12791 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.crt
	I0813 21:09:44.903997   12791 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:44.922443   12791 out.go:177]   - kubelet.network-plugin=cni
	I0813 21:09:44.923908   12791 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:09:44.923979   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:44.924054   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.004762   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.004791   12791 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:09:45.004856   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.042121   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.042150   12791 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:09:45.042226   12791 ssh_runner.go:149] Run: crio config
	I0813 21:09:45.253009   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:45.253045   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:45.253059   12791 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:09:45.253078   12791 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813210910-30853 NodeName:newest-cni-20210813210910-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.210 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:09:45.253242   12791 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813210910-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
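The kubeadm config above is rendered from the parameters listed in the preceding "kubeadm options" line. As an illustration of how such a render step can work, the sketch below pushes a few of those values through Go's text/template; the template fragment and struct here are illustrative stand-ins, not minikube's actual template.

package main

import (
	"log"
	"os"
	"text/template"
)

// params holds a few of the values that appear in the rendered config above.
type params struct {
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

// A deliberately tiny fragment of a kubeadm ClusterConfiguration.
const fragment = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	// Values taken from the log above.
	p := params{
		PodSubnet:         "192.168.111.111/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.22.0-rc.0",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
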
	I0813 21:09:45.253382   12791 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813210910-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.210 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
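
The ExecStart line in the kubelet unit above is assembled from a map of flags (the profile's extra options merged over defaults); the flags in the logged line come out in alphabetical order, consistent with sorting the keys so the generated unit file is stable across runs. A small sketch of that assembly, using a hypothetical subset of the logged flags:

package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	bin := "/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet"
	// A few of the flags from the logged ExecStart line.
	flags := map[string]string{
		"container-runtime":          "remote",
		"container-runtime-endpoint": "/var/run/crio/crio.sock",
		"hostname-override":          "newest-cni-20210813210910-30853",
		"network-plugin":             "cni",
		"node-ip":                    "192.168.39.210",
	}
	// Sort keys so the generated unit file is deterministic.
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	parts := []string{bin}
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	fmt.Println("ExecStart=" + strings.Join(parts, " "))
}
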
	I0813 21:09:45.253451   12791 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:09:45.260928   12791 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:09:45.260983   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:09:45.268144   12791 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (554 bytes)
	I0813 21:09:45.280833   12791 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:09:45.293352   12791 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0813 21:09:45.306281   12791 ssh_runner.go:149] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0813 21:09:45.310235   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:09:45.322126   12791 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853 for IP: 192.168.39.210
	I0813 21:09:45.322191   12791 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:09:45.322212   12791 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:09:45.322281   12791 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:45.322307   12791 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a
	I0813 21:09:45.322319   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a with IP's: [192.168.39.210 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:09:45.521630   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a ...
	I0813 21:09:45.521662   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a: {Name:mk4aa4db18dba264c364eea6455fafca6541c687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521857   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a ...
	I0813 21:09:45.521869   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a: {Name:mk4bafabda5b550064b81d0be7e6d613e7cbe853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521953   12791 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt
	I0813 21:09:45.522012   12791 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key
	I0813 21:09:45.522063   12791 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key
	I0813 21:09:45.522071   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt with IP's: []
	I0813 21:09:45.572044   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt ...
	I0813 21:09:45.572072   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt: {Name:mk46480092ca0ddfdbb22ced231c8543e6fadff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.572258   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key ...
	I0813 21:09:45.572270   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key: {Name:mk2ff838c1ce904cf05995003085f2c953d17b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
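
The "generating minikube signed cert ... with IP's" steps above amount to issuing a leaf certificate whose subject-alternative-name list carries the apiserver IPs, signed by the profile CA. Here is a compact crypto/x509 sketch of the same idea, substituting a throwaway in-memory CA for the profile's ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/template standing in for minikube's profile CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// Leaf certificate carrying the IP SANs from the log line above.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.39.210"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Emit the cert in PEM form, as minikube does for apiserver.crt.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
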
	I0813 21:09:45.572443   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:09:45.572486   12791 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:09:45.572497   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:09:45.572520   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:09:45.572550   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:09:45.572575   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:09:45.572620   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:45.573530   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:09:45.591406   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:09:45.607675   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:09:45.623382   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:09:44.885025   11447 out.go:204]   - Booting up control plane ...
	I0813 21:09:45.638600   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:09:45.655496   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:09:45.672748   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:09:45.690934   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:09:45.709394   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:09:45.727886   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:09:45.747118   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:09:45.764623   12791 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:09:45.776487   12791 ssh_runner.go:149] Run: openssl version
	I0813 21:09:45.782506   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:09:45.790602   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795798   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795845   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.801633   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:09:45.809459   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:09:45.817086   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821525   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821581   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.827427   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:09:45.835137   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:09:45.843222   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848030   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848070   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.854871   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:09:45.863382   12791 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:45.863483   12791 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:09:45.863550   12791 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:45.897179   12791 cri.go:76] found id: ""
	I0813 21:09:45.897265   12791 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:09:45.904791   12791 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:45.911599   12791 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:45.918334   12791 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:45.918383   12791 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:57.982116   11447 out.go:204]   - Configuring RBAC rules ...
	I0813 21:09:58.584325   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:09:58.584349   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:00.460094   12791 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:58.586084   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:09:58.586145   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:09:58.603522   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:09:58.627002   11447 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:09:58.627101   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:58.627103   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813210102-30853 minikube.k8s.io/updated_at=2021_08_13T21_09_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.050930   11447 ops.go:34] apiserver oom_adj: -16
	I0813 21:09:59.051059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.695711   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.195937   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.695450   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.003248   12791 out.go:204]   - Booting up control plane ...
	I0813 21:10:01.195565   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:01.695971   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.195512   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.696069   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.195960   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.696007   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.195636   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.695628   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.195701   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.695999   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.044352   11600 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (46.651842681s)
	I0813 21:10:09.044429   11600 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:09.059478   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:09.059553   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:09.093284   11600 cri.go:76] found id: ""
	I0813 21:10:09.093381   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:09.100568   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:09.107226   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:09.107269   11600 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:10:06.195800   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:06.695240   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.195746   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.695213   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.195912   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.695965   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.195595   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.696049   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.195131   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.695293   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.730908   11600 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:11.196059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:11.534135   11447 kubeadm.go:985] duration metric: took 12.907094032s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:11.534170   11447 kubeadm.go:392] StartCluster complete in 6m5.98958255s
	I0813 21:10:11.534191   11447 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:11.534316   11447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:11.535601   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:12.110091   11447 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813210102-30853" rescaled to 1
	I0813 21:10:12.110179   11447 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:10:12.112084   11447 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:12.110253   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:12.112158   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:12.110569   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:10:12.110623   11447 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:10:12.112334   11447 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112337   11447 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112351   11447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112358   11447 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112366   11447 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:12.112400   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112736   11447 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112752   11447 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112760   11447 addons.go:147] addon metrics-server should already be in state true
	I0813 21:10:12.112763   11447 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112774   11447 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112783   11447 addons.go:147] addon dashboard should already be in state true
	I0813 21:10:12.112784   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112802   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112857   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.112894   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.112750   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113192   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113201   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113224   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113233   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113340   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.140644   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0813 21:10:12.140642   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0813 21:10:12.140661   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0813 21:10:12.141348   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141465   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141541   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141935   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.141953   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142074   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142081   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142089   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142093   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142438   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.142486   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143136   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143176   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.143388   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143929   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143972   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.144251   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.144301   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0813 21:10:12.144729   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.145337   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.145357   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.145698   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.146348   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.146380   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161135   11447 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.161159   11447 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:12.161188   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.161594   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.161636   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161853   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0813 21:10:12.161878   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0813 21:10:12.162218   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162412   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162720   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162740   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.162900   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162921   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.163146   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.163294   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.166669   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.169181   11447 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.169252   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:10:12.169267   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:10:12.167214   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.169288   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.169571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.173910   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.175978   11447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:12.176070   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0813 21:10:12.176093   11447 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.176103   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:12.176120   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.175639   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.176186   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.176216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.175916   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0813 21:10:12.176232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.176420   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.176469   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.176549   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.176672   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.176869   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.177027   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177041   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177293   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177308   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177366   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.177663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.177782   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.178349   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.178391   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.181885   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.183919   11447 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:10:12.182804   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183976   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.184012   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183416   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:10.812498   11600 out.go:204]   - Booting up control plane ...
	I0813 21:10:12.186349   11447 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.186413   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:10:12.184193   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.186427   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:10:12.186446   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.186621   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.186808   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.190702   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0813 21:10:12.191063   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.191556   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.191584   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.191977   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.192165   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.192357   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192757   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.192786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192929   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.193084   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.193242   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.193363   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.195129   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.195341   11447 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.195358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:12.195378   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.200908   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201282   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.201309   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201443   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.201571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.201711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.201825   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.425248   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.468978   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:10:12.469021   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:10:12.494701   11447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.495206   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:10:12.499329   11447 node_ready.go:49] node "default-k8s-different-port-20210813210102-30853" has status "Ready":"True"
	I0813 21:10:12.499359   11447 node_ready.go:38] duration metric: took 4.621451ms waiting for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.499373   11447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:12.499757   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.510602   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:12.610525   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:10:12.610562   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:10:12.656245   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:10:12.656276   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:10:12.772157   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:10:12.772191   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:10:12.815178   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:12.815208   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:10:12.932243   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:10:12.932272   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:10:12.992201   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:13.151328   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:10:13.151358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:10:13.272742   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:10:13.272771   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:10:13.504799   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:10:13.504829   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:10:13.711447   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:10:13.711476   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:10:13.833690   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:10:13.833722   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:10:13.907807   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:13.907839   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:10:14.189833   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:14.535190   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.411080   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.985781369s)
	I0813 21:10:15.411145   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411139   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.91134851s)
	I0813 21:10:15.411163   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411180   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411211   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411243   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.916004514s)
	I0813 21:10:15.411301   11447 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0813 21:10:15.412648   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412658   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412711   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412721   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412731   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412738   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412765   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412779   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412797   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.412740   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413131   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413156   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413170   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413203   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413207   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413222   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413245   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.413261   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413535   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413550   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138255   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.145991542s)
	I0813 21:10:16.138325   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138339   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138639   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.138660   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.138663   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138692   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138702   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138996   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.139040   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.139056   11447 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:16.138998   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.609336   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:17.060932   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.871038717s)
	I0813 21:10:17.061005   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061023   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061327   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061348   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.061358   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061349   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061370   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061708   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061715   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061777   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.064437   11447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:10:17.064471   11447 addons.go:344] enableAddons completed in 4.953854482s
	I0813 21:10:19.033855   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:21.685414   12791 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:22.697730   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:10:22.697758   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:22.699669   12791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:22.699748   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:22.711081   12791 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:10:22.740715   12791 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:22.740845   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:22.740928   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813210910-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_22_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.063141   12791 ops.go:34] apiserver oom_adj: -16
	I0813 21:10:23.063228   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.680146   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.179617   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.680324   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:25.180108   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:21.530978   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.032299   11447 pod_ready.go:92] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:22.032329   11447 pod_ready.go:81] duration metric: took 9.521694058s waiting for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:22.032343   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.052078   11447 pod_ready.go:102] pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.548192   11447 pod_ready.go:97] error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548233   11447 pod_ready.go:81] duration metric: took 2.515881289s waiting for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:24.548247   11447 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548257   11447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554129   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.554154   11447 pod_ready.go:81] duration metric: took 5.887843ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554167   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559840   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.559859   11447 pod_ready.go:81] duration metric: took 5.68331ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559871   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565198   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.565217   11447 pod_ready.go:81] duration metric: took 5.336694ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565226   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571811   11447 pod_ready.go:92] pod "kube-proxy-jn56d" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.571827   11447 pod_ready.go:81] duration metric: took 6.594619ms waiting for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571837   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749142   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.749167   11447 pod_ready.go:81] duration metric: took 177.31996ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749179   11447 pod_ready.go:38] duration metric: took 12.249789309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:24.749199   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:24.749257   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:24.784712   11447 api_server.go:70] duration metric: took 12.674498021s to wait for apiserver process to appear ...
	I0813 21:10:24.784740   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:24.784753   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:10:24.793567   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:10:24.794892   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:10:24.794914   11447 api_server.go:129] duration metric: took 10.167822ms to wait for apiserver health ...
	I0813 21:10:24.794925   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:24.951664   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:24.951701   11447 system_pods.go:61] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:24.951709   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:24.951717   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:24.951726   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:24.951731   11447 system_pods.go:61] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:24.951736   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:24.951745   11447 system_pods.go:61] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:24.951753   11447 system_pods.go:61] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:24.951765   11447 system_pods.go:74] duration metric: took 156.833527ms to wait for pod list to return data ...
	I0813 21:10:24.951775   11447 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:25.148940   11447 default_sa.go:45] found service account: "default"
	I0813 21:10:25.148969   11447 default_sa.go:55] duration metric: took 197.176977ms for default service account to be created ...
	I0813 21:10:25.148984   11447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:10:25.352044   11447 system_pods.go:86] 8 kube-system pods found
	I0813 21:10:25.352084   11447 system_pods.go:89] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:25.352096   11447 system_pods.go:89] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:25.352103   11447 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:25.352112   11447 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:25.352119   11447 system_pods.go:89] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:25.352129   11447 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:25.352141   11447 system_pods.go:89] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:25.352150   11447 system_pods.go:89] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:25.352160   11447 system_pods.go:126] duration metric: took 203.170374ms to wait for k8s-apps to be running ...
	I0813 21:10:25.352177   11447 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:10:25.352232   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:25.366009   11447 system_svc.go:56] duration metric: took 13.82353ms WaitForService to wait for kubelet.
	I0813 21:10:25.366041   11447 kubeadm.go:547] duration metric: took 13.255833147s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:10:25.366078   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:25.671992   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:25.672026   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:25.672045   11447 node_conditions.go:105] duration metric: took 305.961488ms to run NodePressure ...
	I0813 21:10:25.672058   11447 start.go:231] waiting for startup goroutines ...
	I0813 21:10:25.741468   11447 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:10:25.743555   11447 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813210102-30853" cluster and "default" namespace by default
	I0813 21:10:29.004104   11600 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:29.713525   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:10:29.713570   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:25.680008   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.180477   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.680294   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.180411   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.679956   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.179559   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.679596   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.179509   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.679704   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.180325   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.715719   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:29.715784   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:29.736151   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:10:29.781971   11600 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:29.782030   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.782090   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813205915-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.830681   11600 ops.go:34] apiserver oom_adj: -16
	I0813 21:10:30.150647   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.779463   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.280355   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.779613   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.680059   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.180084   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.679975   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.179732   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.679873   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.179878   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.679567   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.180100   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.679513   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.825619   12791 kubeadm.go:985] duration metric: took 12.084819945s to wait for elevateKubeSystemPrivileges.
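
The half-second `kubectl get sa default` polls above are the elevateKubeSystemPrivileges wait: minikube retries until the cluster has created the `default` ServiceAccount. A rough Go equivalent of that loop, with the binary and kubeconfig paths copied from the log and the timeout an arbitrary choice:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the default
// ServiceAccount exists or the timeout elapses, matching the ~500ms
// cadence visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found after %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute, // assumption; the log took ~12s
	)
	fmt.Println("wait result:", err)
}
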
	I0813 21:10:34.825653   12791 kubeadm.go:392] StartCluster complete in 48.962278505s
	I0813 21:10:34.825676   12791 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:34.825790   12791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:34.827844   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:35.357758   12791 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813210910-30853" rescaled to 1
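
kapi.go then pins the coredns deployment at a single replica. minikube does this through the Kubernetes API; the sketch below gets the same effect by shelling out to kubectl (paths taken from the log, otherwise illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// rescaleCoreDNS scales the kube-system/coredns deployment to one replica,
// the same end state the kapi.go line above reports.
func rescaleCoreDNS(kubectl, kubeconfig string) error {
	out, err := exec.Command(kubectl,
		"--kubeconfig="+kubeconfig,
		"-n", "kube-system",
		"scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
	if err != nil {
		return fmt.Errorf("scale coredns: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := rescaleCoreDNS("/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl",
		"/var/lib/minikube/kubeconfig")
	fmt.Println(err)
}
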
	I0813 21:10:35.357830   12791 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:10:35.359667   12791 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:35.357884   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:35.357927   12791 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 21:10:35.358131   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:10:35.359798   12791 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359818   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:35.359820   12791 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.359828   12791 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:35.359855   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.359852   12791 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359908   12791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813210910-30853"
	I0813 21:10:35.360333   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360381   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.360414   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360455   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.374986   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42203
	I0813 21:10:35.375050   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0813 21:10:35.375635   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.375910   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.377813   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377836   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.377912   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377925   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.378238   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.378810   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.378869   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.379811   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.380004   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.391384   12791 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.391410   12791 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:35.391438   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.391832   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.391897   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.391999   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0813 21:10:35.392393   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.392989   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.393014   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.393496   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.393691   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.397628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.400074   12791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:35.400221   12791 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:35.400233   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:35.400253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.406732   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0813 21:10:35.407200   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.407553   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.407703   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.407724   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.408324   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.408333   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.408348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.408363   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.408489   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.408643   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.408815   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.409189   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.409266   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.424756   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0813 21:10:35.425178   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.425688   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.425717   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.426032   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.426208   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.429530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.429754   12791 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:35.429775   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:35.429797   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.436000   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.436664   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.436942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.437117   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.437291   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
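
The `scp memory --> <path>` lines above stream an in-memory manifest straight to the node rather than copying a file from disk. One way to approximate that with a stock ssh client and `sudo tee`; the host, key path, and manifest here are hypothetical, and this is not how minikube's own SSH client is implemented:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory writes an in-memory byte slice to remotePath on the node by
// piping it through ssh into `sudo tee`, approximating the
// "scp memory --> <path>" steps in the log above.
func scpMemory(host, keyPath, remotePath string, data []byte) error {
	cmd := exec.Command("ssh", "-i", keyPath, host,
		"sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	err := scpMemory("docker@192.168.39.210",
		"/home/jenkins/.minikube/machines/newest-cni/id_rsa", // hypothetical key path
		"/etc/kubernetes/addons/demo.yaml", manifest)
	fmt.Println(err)
}
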
	I0813 21:10:35.594125   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:32.279420   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.780066   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.280227   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.779756   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.280100   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.779428   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.279470   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.779478   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.279401   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.779390   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.796621   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:36.020007   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
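
The bash pipeline above reads the coredns ConfigMap, uses sed to splice a `hosts { ... }` block immediately before the `forward . /etc/resolv.conf` plugin, and replaces the ConfigMap; that is how the `host.minikube.internal` record lands in CoreDNS. The string transformation on its own, as a small Go function:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block immediately before the
// `forward . /etc/resolv.conf` line, which is what the sed pipeline in the
// log does to the coredns Corefile. hostIP is the host-side gateway
// address (192.168.39.1 for this cluster).
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
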
	I0813 21:10:36.022097   12791 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:36.022141   12791 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:37.953285   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.359113303s)
	I0813 21:10:37.953357   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953374   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.953716   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.953737   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:37.953747   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953764   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.954032   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.954047   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018145   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.221484906s)
	I0813 21:10:38.018195   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018210   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018146   12791 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.995992413s)
	I0813 21:10:38.018276   12791 api_server.go:70] duration metric: took 2.660410949s to wait for apiserver process to appear ...
	I0813 21:10:38.018284   12791 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:38.018293   12791 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:10:38.018510   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018529   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018538   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018547   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018806   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018828   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018842   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018866   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.019228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:10:38.019231   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.019253   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.021307   12791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 21:10:38.021330   12791 addons.go:344] enableAddons completed in 2.663409626s
	I0813 21:10:38.037183   12791 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
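
The api_server.go health wait above is a plain HTTPS GET against /healthz until it returns 200 with body `ok`. A self-contained sketch of the probe; TLS verification is skipped only because this sketch does not load the cluster CA that the real check trusts:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same probe as the log lines above: GET
// https://<ip>:8443/healthz and report status plus body ("ok" when the
// apiserver is healthy).
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver cert is signed by the cluster's own CA,
			// which this sketch does not load
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d: %s", resp.StatusCode, body), nil
}

func main() {
	status, err := checkHealthz("https://192.168.39.210:8443/healthz")
	fmt.Println(status, err)
}
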
	I0813 21:10:38.040155   12791 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:10:38.040215   12791 api_server.go:129] duration metric: took 21.924445ms to wait for apiserver health ...
	I0813 21:10:38.040228   12791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:38.072532   12791 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:38.072583   12791 system_pods.go:61] "coredns-78fcd69978-42frp" [ffc12ff0-fe4e-422b-ae81-83f17416e379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072594   12791 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072605   12791 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running
	I0813 21:10:38.072623   12791 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:10:38.072630   12791 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running
	I0813 21:10:38.072639   12791 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running
	I0813 21:10:38.072646   12791 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:10:38.072656   12791 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:10:38.072667   12791 system_pods.go:74] duration metric: took 32.432184ms to wait for pod list to return data ...
	I0813 21:10:38.072681   12791 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:38.079488   12791 default_sa.go:45] found service account: "default"
	I0813 21:10:38.079509   12791 default_sa.go:55] duration metric: took 6.821814ms for default service account to be created ...
	I0813 21:10:38.079522   12791 kubeadm.go:547] duration metric: took 2.721660353s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:10:38.079544   12791 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:38.087838   12791 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.06779332s)
	I0813 21:10:38.087870   12791 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 21:10:38.089094   12791 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:38.089130   12791 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:38.089146   12791 node_conditions.go:105] duration metric: took 9.595836ms to run NodePressure ...
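
The NodePressure step reads each node's capacity (here 17784752Ki of ephemeral storage and 2 CPUs). A sketch that recovers the same fields by decoding `kubectl get nodes -o json`, assuming kubectl is on PATH and already pointed at the cluster:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Decode just enough of the node list to print the capacity fields the
// node_conditions.go lines above report.
func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct{ Capacity map[string]string }
		}
	}
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}
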
	I0813 21:10:38.089160   12791 start.go:231] waiting for startup goroutines ...
	I0813 21:10:38.151075   12791 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:10:38.152833   12791 out.go:177] 
	W0813 21:10:38.153012   12791 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:10:38.154648   12791 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:10:38.156287   12791 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813210910-30853" cluster and "default" namespace by default
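
The skew warning above compares the local kubectl minor version (1.20) against the cluster's (1.22). The arithmetic behind "minor skew: 2", as a small Go helper; the version strings are the ones from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch[-suffix]" version strings. It assumes
// well-formed versions; real version parsing would validate more.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.20.5", "1.22.0-rc.0")) // 2, as logged above
}
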
	I0813 21:10:37.279672   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:37.780229   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:38.279437   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:38.780138   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:39.279696   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:39.780100   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:40.279336   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:40.780189   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:41.279752   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:41.780283   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:42.280242   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:42.595734   11600 kubeadm.go:985] duration metric: took 12.813777513s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:42.595765   11600 kubeadm.go:392] StartCluster complete in 6m12.527422021s
	I0813 21:10:42.595790   11600 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:42.595896   11600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:42.597520   11600 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:43.236927   11600 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813205915-30853" rescaled to 1
	I0813 21:10:43.236992   11600 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:10:43.239406   11600 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:43.239457   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:43.237045   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:43.237068   11600 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:10:43.239565   11600 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.237236   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:10:43.239587   11600 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.239595   11600 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:43.239629   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.239632   11600 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.239635   11600 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.239647   11600 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.239656   11600 addons.go:147] addon metrics-server should already be in state true
	I0813 21:10:43.239658   11600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813205915-30853"
	I0813 21:10:43.239683   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.239570   11600 addons.go:59] Setting dashboard=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.239728   11600 addons.go:135] Setting addon dashboard=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.239746   11600 addons.go:147] addon dashboard should already be in state true
	I0813 21:10:43.239775   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.240104   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240104   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240104   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240150   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.240220   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240239   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.240255   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.240314   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.252172   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0813 21:10:43.252624   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.253172   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.253192   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.253594   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.254174   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.254214   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.254494   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0813 21:10:43.254933   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.255405   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.255426   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.255490   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0813 21:10:43.255831   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.256032   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.256290   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.256307   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.256603   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.256646   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.256747   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.256913   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.266911   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33871
	I0813 21:10:43.267347   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.267815   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.267839   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.268171   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.268724   11600 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.268749   11600 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:43.268762   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.268778   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.268800   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.269179   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.269231   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.270737   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0813 21:10:43.271117   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.271588   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.271614   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.271955   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.272130   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.275862   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.278011   11600 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:10:43.278087   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:10:43.278099   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:10:43.278122   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.280649   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0813 21:10:43.281018   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0813 21:10:43.281258   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.281705   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.281820   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.281840   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.282233   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.282382   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.282400   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.282403   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.282772   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.282933   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.286320   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.286532   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.288191   11600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:43.286938   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.288312   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.287193   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.287628   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.288362   11600 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:43.288376   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:43.288396   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.288523   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.289968   11600 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:10:43.288678   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.291508   11600 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:10:43.291568   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:10:43.291579   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:10:43.290296   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.291596   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.292931   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0813 21:10:43.293290   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.293838   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.293859   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.294224   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.294793   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.294930   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.296172   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.296766   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.296794   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.297070   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.297233   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.297402   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.297537   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.298841   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.299283   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.299312   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.299430   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.299586   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.299727   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.299911   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.308859   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0813 21:10:43.309223   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.309689   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.309713   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.310081   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.310261   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.312995   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.313192   11600 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:43.313207   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:43.313224   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.318697   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.319136   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.319164   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.319284   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.319423   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.319563   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.319647   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.415710   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:10:43.415702   11600 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813205915-30853" to be "Ready" ...
	I0813 21:10:43.421333   11600 node_ready.go:49] node "no-preload-20210813205915-30853" has status "Ready":"True"
	I0813 21:10:43.421346   11600 node_ready.go:38] duration metric: took 5.531339ms waiting for node "no-preload-20210813205915-30853" to be "Ready" ...
	I0813 21:10:43.421356   11600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:43.428420   11600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:43.449946   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:10:43.449967   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:10:43.498458   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:43.513925   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:43.518020   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:10:43.518039   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:10:43.528422   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:10:43.528442   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:10:43.587727   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:10:43.587758   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:10:43.590766   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:10:43.590788   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:10:43.656475   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:10:43.656504   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:10:43.677102   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:43.677125   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:10:43.739364   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:43.741528   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:10:43.741548   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:10:43.865339   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:10:43.865366   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:10:43.945836   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:10:43.945863   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:10:44.183060   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:10:44.183089   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:10:44.293405   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:44.293435   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:10:44.354576   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
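
The dashboard addon goes out as a single kubectl apply with ten -f flags. A helper that assembles that argument vector; the sudo/KUBECONFIG prefix and manifest paths mirror the log, but this is an illustration rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"strings"
)

// buildApplyArgs assembles the one multi-file `kubectl apply` invocation
// seen in the log: one -f flag per staged addon manifest.
func buildApplyArgs(kubectl, kubeconfig string, manifests []string) []string {
	args := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return args
}

func main() {
	args := buildApplyArgs(
		"/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	fmt.Println(strings.Join(args, " "))
}
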
	I0813 21:10:45.209715   11600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.793954911s)
	I0813 21:10:45.209754   11600 start.go:728] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS
	I0813 21:10:45.451507   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:45.768509   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.254548453s)
	I0813 21:10:45.768554   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.768568   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.768630   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.270149498s)
	I0813 21:10:45.768648   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.768657   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.768844   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.768865   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.768875   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.768889   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.768988   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.768993   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:45.769003   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.769017   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.769029   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.769078   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:45.769100   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.769116   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.769133   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.769142   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.769244   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:45.769252   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.769266   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.770445   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.770461   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:46.554301   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.814896952s)
	I0813 21:10:46.554352   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:46.554372   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:46.554651   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:46.554666   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:46.554675   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:46.554682   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:46.554919   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:46.554933   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:46.554943   11600 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813205915-30853"
	I0813 21:10:46.554967   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:47.488790   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:47.928364   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.573734309s)
	I0813 21:10:47.928421   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:47.928444   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:47.928734   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:47.928749   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:47.928765   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:47.928786   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:47.928801   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:47.929007   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:47.929021   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:47.931002   11600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:10:47.931024   11600 addons.go:344] enableAddons completed in 4.693964191s
	I0813 21:10:49.943997   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:51.946699   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:54.448800   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:56.946503   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:57.445648   11600 pod_ready.go:97] error getting pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-8cmv5" not found
	I0813 21:10:57.445683   11600 pod_ready.go:81] duration metric: took 14.017237637s waiting for pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:57.445697   11600 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-8cmv5" not found
	I0813 21:10:57.445707   11600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-djqln" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.458150   11600 pod_ready.go:92] pod "coredns-78fcd69978-djqln" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.458167   11600 pod_ready.go:81] duration metric: took 12.453041ms waiting for pod "coredns-78fcd69978-djqln" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.458177   11600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.470140   11600 pod_ready.go:92] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.470159   11600 pod_ready.go:81] duration metric: took 11.975627ms waiting for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.470169   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.485954   11600 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.485976   11600 pod_ready.go:81] duration metric: took 15.799825ms waiting for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.485988   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.495923   11600 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.495948   11600 pod_ready.go:81] duration metric: took 9.9489ms waiting for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.495962   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pm8kf" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.641582   11600 pod_ready.go:92] pod "kube-proxy-pm8kf" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.641605   11600 pod_ready.go:81] duration metric: took 145.634184ms waiting for pod "kube-proxy-pm8kf" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.641618   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:58.057929   11600 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:58.057949   11600 pod_ready.go:81] duration metric: took 416.322441ms waiting for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:58.057958   11600 pod_ready.go:38] duration metric: took 14.636591071s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
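
The pod_ready.go entries above are a per-pod readiness poll: each system pod is re-fetched until its Ready condition reports True, and a pod that vanishes mid-wait (as coredns-78fcd69978-8cmv5 did) is skipped rather than treated as a failure. Below is a minimal sketch of that pattern, assuming client-go and a kubeconfig in the default location; it illustrates the technique, not minikube's actual pod_ready.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod's Ready condition is True or the
	// timeout expires. A Get error (e.g. NotFound after the old coredns
	// replica was deleted) just keeps the poll going, mirroring the
	// "skipping!" behavior in the log.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-78fcd69978-djqln", 6*time.Minute))
	}
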
	I0813 21:10:58.057974   11600 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:58.058016   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:58.072729   11600 api_server.go:70] duration metric: took 14.835697758s to wait for apiserver process to appear ...
	I0813 21:10:58.072753   11600 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:58.072764   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:10:58.080263   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 200:
	ok
	I0813 21:10:58.082131   11600 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:10:58.082151   11600 api_server.go:129] duration metric: took 9.390895ms to wait for apiserver health ...
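
The healthz wait above is a plain HTTPS GET against the apiserver, succeeding once /healthz returns 200 with the body "ok". A hedged sketch follows, with certificate verification disabled purely for illustration (minikube itself trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// InsecureSkipVerify is for this sketch only; never do this in
		// production code.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.105.107:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // 200 ok when healthy
	}
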
	I0813 21:10:58.082162   11600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:58.250891   11600 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:58.250963   11600 system_pods.go:61] "coredns-78fcd69978-djqln" [99eb1cf7-bc30-4c62-a70a-1d529bd0c68b] Running
	I0813 21:10:58.250985   11600 system_pods.go:61] "etcd-no-preload-20210813205915-30853" [242eb16a-dc65-4352-beb3-09cd64be834c] Running
	I0813 21:10:58.251008   11600 system_pods.go:61] "kube-apiserver-no-preload-20210813205915-30853" [9293ee98-b7b3-47f2-b7bd-4614b8482ca1] Running
	I0813 21:10:58.251025   11600 system_pods.go:61] "kube-controller-manager-no-preload-20210813205915-30853" [91eee213-027e-4385-ab9c-23a1edf8ccde] Running
	I0813 21:10:58.251033   11600 system_pods.go:61] "kube-proxy-pm8kf" [94304ca2-43ad-479d-b0cf-0d034dd53c30] Running
	I0813 21:10:58.251042   11600 system_pods.go:61] "kube-scheduler-no-preload-20210813205915-30853" [63cdc1cb-db75-4391-a159-9f351f3f189b] Running
	I0813 21:10:58.251060   11600 system_pods.go:61] "metrics-server-7c784ccb57-sjf7l" [1a8eb8de-eb5b-4305-9a3c-0f560914ed99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:58.251071   11600 system_pods.go:61] "storage-provisioner" [7701997b-7e28-4be2-925c-50ca1dd46b4e] Running
	I0813 21:10:58.251085   11600 system_pods.go:74] duration metric: took 168.915852ms to wait for pod list to return data ...
	I0813 21:10:58.251100   11600 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:58.441502   11600 default_sa.go:45] found service account: "default"
	I0813 21:10:58.441527   11600 default_sa.go:55] duration metric: took 190.416989ms for default service account to be created ...
	I0813 21:10:58.441539   11600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:10:58.645345   11600 system_pods.go:86] 8 kube-system pods found
	I0813 21:10:58.645376   11600 system_pods.go:89] "coredns-78fcd69978-djqln" [99eb1cf7-bc30-4c62-a70a-1d529bd0c68b] Running
	I0813 21:10:58.645382   11600 system_pods.go:89] "etcd-no-preload-20210813205915-30853" [242eb16a-dc65-4352-beb3-09cd64be834c] Running
	I0813 21:10:58.645386   11600 system_pods.go:89] "kube-apiserver-no-preload-20210813205915-30853" [9293ee98-b7b3-47f2-b7bd-4614b8482ca1] Running
	I0813 21:10:58.645391   11600 system_pods.go:89] "kube-controller-manager-no-preload-20210813205915-30853" [91eee213-027e-4385-ab9c-23a1edf8ccde] Running
	I0813 21:10:58.645395   11600 system_pods.go:89] "kube-proxy-pm8kf" [94304ca2-43ad-479d-b0cf-0d034dd53c30] Running
	I0813 21:10:58.645400   11600 system_pods.go:89] "kube-scheduler-no-preload-20210813205915-30853" [63cdc1cb-db75-4391-a159-9f351f3f189b] Running
	I0813 21:10:58.645412   11600 system_pods.go:89] "metrics-server-7c784ccb57-sjf7l" [1a8eb8de-eb5b-4305-9a3c-0f560914ed99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:58.645418   11600 system_pods.go:89] "storage-provisioner" [7701997b-7e28-4be2-925c-50ca1dd46b4e] Running
	I0813 21:10:58.645427   11600 system_pods.go:126] duration metric: took 203.88379ms to wait for k8s-apps to be running ...
	I0813 21:10:58.645458   11600 system_svc.go:44] waiting for kubelet service to be running ...
	I0813 21:10:58.645508   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:58.657715   11600 system_svc.go:56] duration metric: took 12.247152ms WaitForService to wait for kubelet.
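
The kubelet check is a single remote command whose exit status is the whole answer. A sketch run locally with os/exec, standing in for minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command verbatim (including the extra
		// "service" token minikube passes); exit status 0 means active.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
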
	I0813 21:10:58.657747   11600 kubeadm.go:547] duration metric: took 15.420720912s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:10:58.657776   11600 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:58.842378   11600 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:58.842407   11600 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:58.842421   11600 node_conditions.go:105] duration metric: took 184.639144ms to run NodePressure ...
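
The node_conditions lines read the node's reported capacity (17784752Ki of ephemeral storage and 2 CPUs here). A sketch of the same read via client-go, assuming a default kubeconfig:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Prints e.g. "17784752Ki" and "2", matching the log above.
			fmt.Println(n.Name, n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
		}
	}
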
	I0813 21:10:58.842431   11600 start.go:231] waiting for startup goroutines ...
	I0813 21:10:58.885572   11600 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:10:58.887709   11600 out.go:177] 
	W0813 21:10:58.887907   11600 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:10:58.889610   11600 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:10:58.891372   11600 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813205915-30853" cluster and "default" namespace by default
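
The warning a few lines up comes from a minor-version skew check: kubectl is supported within one minor version of the apiserver, and 1.20 against 1.22 is a skew of 2. A small sketch of that computation:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component from a version like "1.22.0-rc.0".
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.20.5", "1.22.0-rc.0"
		skew := minor(cluster) - minor(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // 2, beyond the supported +/-1
	}
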
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:03:59 UTC, end at Fri 2021-08-13 21:11:18 UTC. --
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.101245155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=f82e0052-b500-4037-9f11-72c3e9f070ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.762196496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6fe2fe4f-8b9d-4777-9de0-01781a2f4048 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.762345034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6fe2fe4f-8b9d-4777-9de0-01781a2f4048 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.762620466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6fe2fe4f-8b9d-4777-9de0-01781a2f4048 name=/runtime.v1alpha2.RuntimeService/ListContainers
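
The CRI-O debug lines in this section are the server side of RuntimeService/ListContainers calls arriving over the CRI socket while the log collector enumerates containers. A hedged sketch of the client side of that call, assuming the v1alpha2 CRI API named in the log and CRI-O's default socket path; on the node itself, sudo crictl ps -a performs the same query:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter is what the log shows: "No filters were applied,
		// returning full container list".
		resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.12s %s %v\n", c.Id, c.Metadata.Name, c.State)
		}
	}
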
	[... three further ListContainersRequest/Response cycles at 21:11:17.800, 21:11:17.839 and 21:11:17.882 (ids 35b830c8, 8335b8dd, 8eae8808) omitted: same empty filter and the same container list as the response above ...]
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.920430535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a2a9cfcf-7e2e-454f-b1bc-2a803389a7f0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.920582154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a2a9cfcf-7e2e-454f-b1bc-2a803389a7f0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:17 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:17.920801232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a2a9cfcf-7e2e-454f-b1bc-2a803389a7f0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	[... the same ListContainersRequest (empty filter) and full-list ListContainersResponse repeat verbatim at 21:11:17.963 (id=8855a564-a50f-49f4-ad24-4ff3c1a2eaec), 21:11:18.002 (id=0cbb0b78-c92b-4b6b-9692-e3c064deea28), and 21:11:18.043 (id=37b01da7-9bd5-41d3-a4cb-61cb0f28b6f7); the returned container list is unchanged from the response above ...]
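The ListContainers chatter above is a periodic status poll against CRI-O's v1alpha2 RuntimeService on the unix socket named in the node's cri-socket annotation. A minimal sketch of such a client, assuming the k8s.io/cri-api v1alpha2 bindings and google.golang.org/grpc are available (this is not the kubelet's actual code):

	// list_containers.go - sketch of the call that produces the
	// Request/Response pairs logged above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket from the node annotation above; plaintext
		// credentials are fine on a local unix socket.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// An empty filter hits the "No filters were applied, returning
		// full container list" branch logged by server/container_list.go.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Truncated 13-char IDs match the container status table below.
			fmt.Printf("%s %s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}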
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID
	b9047880f9040       docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f   14 seconds ago      Running             kubernetes-dashboard        0                   87d91f84471b4
	2b6e74c197286       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   1                   0bb2fe30a9e8f
	efd6ad12aeb56       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago      Running             storage-provisioner         0                   023a3746224ec
	25c01a205c3e2       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                                           29 seconds ago      Running             kube-proxy                  0                   6e72a1de7959a
	7dccf87cedabe       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           30 seconds ago      Running             coredns                     0                   d70372cc3523e
	ca410dc379be2       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           31 seconds ago      Exited              coredns                     0                   7f0eb16b3e187
	9212298dc475e       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                           58 seconds ago      Running             etcd                        2                   5eb86fb89bad6
	34a67fc4c35df       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                                           58 seconds ago      Running             kube-scheduler              2                   24d192076c6ef
	94a7894b63ddd       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                                           59 seconds ago      Running             kube-controller-manager     3                   5c6071832ca8b
	fa3d77da505a8       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                                           59 seconds ago      Running             kube-apiserver              2                   9394e33d36b02
	
	* 
	* ==> coredns [7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> coredns [ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
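	The SIGTERM and lameduck lines above come from CoreDNS's health plugin as the superseded coredns-78fcd69978-8cmv5 replica shuts down. A 5s lameduck window is what the stock kubeadm Corefile configures; roughly the following (abbreviated from the defaults, not read back from this cluster):
	
	.:53 {
	    errors
	    health {
	        lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	        pods insecure
	        fallthrough in-addr.arpa ip6.arpa
	    }
	    prometheus :9153
	    forward . /etc/resolv.conf
	    cache 30
	    loop
	    reload
	    loadbalance
	}
	
	The prometheus stanza is also why port 9153 shows up in the coredns container port annotations earlier in the log.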
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210813205915-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210813205915-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=no-preload-20210813205915-30853
	                    minikube.k8s.io/updated_at=2021_08_13T21_10_29_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:10:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210813205915-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 21:11:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.107
	  Hostname:    no-preload-20210813205915-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e63b2204c374f92b3ae1588b0df2556
	  System UUID:                4e63b220-4c37-4f92-b3ae-1588b0df2556
	  Boot ID:                    f8ffe43d-74d2-470d-ae6c-3ef2eea0cc3d
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-djqln                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     36s
	  kube-system                 etcd-no-preload-20210813205915-30853                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         43s
	  kube-system                 kube-apiserver-no-preload-20210813205915-30853              250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-no-preload-20210813205915-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-pm8kf                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-no-preload-20210813205915-30853              100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 metrics-server-7c784ccb57-sjf7l                             100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         32s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-jq4mn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-vl8vp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  62s (x6 over 62s)  kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x6 over 62s)  kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x5 over 62s)  kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 44s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                36s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeReady
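	
	Everything in the conditions, capacity, and events tables above is served from the Node object's status. A minimal client-go sketch that reads the same conditions back, assuming kubeconfig access to this profile (the node name is taken from the output above):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes a kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"no-preload-20210813205915-30853", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Prints the same Type/Status/Reason rows as "describe nodes" above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}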
	
	* 
	* ==> dmesg <==
	* [  +0.033491] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.024807] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1718 comm=systemd-network
	[Aug13 21:04] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.357120] vboxguest: loading out-of-tree module taints kernel.
	[  +0.014328] vboxguest: PCI device not found, probably running on physical hardware.
	[  +3.482256] systemd-fstab-generator[2115]: Ignoring "noauto" for root device
	[  +0.157515] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.205705] systemd-fstab-generator[2154]: Ignoring "noauto" for root device
	[ +29.221301] systemd-fstab-generator[2922]: Ignoring "noauto" for root device
	[Aug13 21:05] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.840286] kauditd_printk_skb: 89 callbacks suppressed
	[Aug13 21:06] NFSD: Unable to end grace period: -110
	[Aug13 21:09] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.154978] kauditd_printk_skb: 14 callbacks suppressed
	[ +16.228089] kauditd_printk_skb: 14 callbacks suppressed
	[Aug13 21:10] systemd-fstab-generator[5166]: Ignoring "noauto" for root device
	[ +18.484011] systemd-fstab-generator[5528]: Ignoring "noauto" for root device
	[ +15.353956] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.119674] kauditd_printk_skb: 77 callbacks suppressed
	[  +7.555194] kauditd_printk_skb: 32 callbacks suppressed
	[Aug13 21:11] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.735187] systemd-fstab-generator[7065]: Ignoring "noauto" for root device
	[  +0.827429] systemd-fstab-generator[7121]: Ignoring "noauto" for root device
	[  +1.026098] systemd-fstab-generator[7175]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357] <==
	* {"level":"info","ts":"2021-08-13T21:10:20.982Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"96c7252540d6160b","local-member-attributes":"{Name:no-preload-20210813205915-30853 ClientURLs:[https://192.168.105.107:2379]}","request-path":"/0/members/96c7252540d6160b/attributes","cluster-id":"ceda1e46dcc8afbb","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T21:10:20.984Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:10:20.985Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.105.107:2379"}
	{"level":"info","ts":"2021-08-13T21:10:20.985Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:10:20.986Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:10:20.990Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-13T21:10:20.990Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T21:10:20.990Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T21:10:20.996Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"ceda1e46dcc8afbb","local-member-id":"96c7252540d6160b","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:10:20.999Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:10:20.999Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2021-08-13T21:10:25.674Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.550074ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1588498814190764221 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io\" value_size:887 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2021-08-13T21:10:25.674Z","caller":"traceutil/trace.go:171","msg":"trace[466367915] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"183.081565ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.674Z","steps":["trace[466367915] 'process raft request'  (duration: 63.556934ms)","trace[466367915] 'compare'  (duration: 117.56781ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:10:25.675Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.881298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-13T21:10:25.675Z","caller":"traceutil/trace.go:171","msg":"trace[1359346222] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:0; response_revision:23; }","duration":"184.101664ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.675Z","steps":["trace[1359346222] 'agreement among raft nodes before linearized reading'  (duration: 63.471235ms)","trace[1359346222] 'range keys from in-memory index tree'  (duration: 120.372978ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-13T21:10:25.676Z","caller":"traceutil/trace.go:171","msg":"trace[94189713] linearizableReadLoop","detail":"{readStateIndex:32; appliedIndex:26; }","duration":"121.583518ms","start":"2021-08-13T21:10:25.554Z","end":"2021-08-13T21:10:25.676Z","steps":["trace[94189713] 'read index received'  (duration: 118.465448ms)","trace[94189713] 'applied index is now lower than readState.Index'  (duration: 3.105664ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:10:25.676Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.865694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-13T21:10:25.676Z","caller":"traceutil/trace.go:171","msg":"trace[1201218958] range","detail":"{range_begin:/registry/resourcequotas/kube-system/; range_end:/registry/resourcequotas/kube-system0; response_count:0; response_revision:29; }","duration":"185.047263ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.676Z","steps":["trace[1201218958] 'agreement among raft nodes before linearized reading'  (duration: 184.715477ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.677Z","caller":"traceutil/trace.go:171","msg":"trace[419869988] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"185.538499ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.677Z","steps":["trace[419869988] 'process raft request'  (duration: 182.830629ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.677Z","caller":"traceutil/trace.go:171","msg":"trace[21709021] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"183.435038ms","start":"2021-08-13T21:10:25.494Z","end":"2021-08-13T21:10:25.677Z","steps":["trace[21709021] 'process raft request'  (duration: 181.472896ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.678Z","caller":"traceutil/trace.go:171","msg":"trace[39482200] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"181.216756ms","start":"2021-08-13T21:10:25.496Z","end":"2021-08-13T21:10:25.678Z","steps":["trace[39482200] 'process raft request'  (duration: 178.96379ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.678Z","caller":"traceutil/trace.go:171","msg":"trace[2091172688] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"179.854409ms","start":"2021-08-13T21:10:25.498Z","end":"2021-08-13T21:10:25.678Z","steps":["trace[2091172688] 'process raft request'  (duration: 177.412662ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.678Z","caller":"traceutil/trace.go:171","msg":"trace[1955463242] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"179.886275ms","start":"2021-08-13T21:10:25.498Z","end":"2021-08-13T21:10:25.678Z","steps":["trace[1955463242] 'process raft request'  (duration: 177.391033ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:10:25.679Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"130.345052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-13T21:10:25.679Z","caller":"traceutil/trace.go:171","msg":"trace[52079535] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:29; }","duration":"130.525353ms","start":"2021-08-13T21:10:25.548Z","end":"2021-08-13T21:10:25.679Z","steps":["trace[52079535] 'agreement among raft nodes before linearized reading'  (duration: 130.295644ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:11:18 up 7 min,  0 users,  load average: 2.01, 0.84, 0.39
	Linux no-preload-20210813205915-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455] <==
	* I0813 21:10:25.401281       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 21:10:25.411578       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 21:10:25.415749       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:10:25.418890       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:10:25.479256       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 21:10:26.193514       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:10:26.193656       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:10:26.217267       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:10:26.229620       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:10:26.229750       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:10:27.383464       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:10:27.475877       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:10:27.600268       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.105.107]
	I0813 21:10:27.601969       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:10:27.616325       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:10:28.374203       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:10:29.548700       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:10:29.677140       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:10:34.943595       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:10:42.061016       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 21:10:42.293352       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 21:10:48.844616       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:10:48.844896       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:10:48.844988       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b] <==
	* E0813 21:10:46.838462       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:46.872718       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0813 21:10:46.905564       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 21:10:46.927969       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:46.933545       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:46.982501       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:46.982953       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:46.983342       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:47.026346       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:47.066378       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.067188       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.095653       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.096290       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.128994       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.129551       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.144354       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.144845       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.158940       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.159304       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.174487       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.175186       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:47.209857       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-vl8vp"
	I0813 21:10:47.282990       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-jq4mn"
	E0813 21:11:12.138998       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:11:12.664368       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c] <==
	* I0813 21:10:48.853441       1 node.go:172] Successfully retrieved node IP: 192.168.105.107
	I0813 21:10:48.853627       1 server_others.go:140] Detected node IP 192.168.105.107
	W0813 21:10:48.853660       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0813 21:10:48.967405       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:10:48.967513       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:10:48.967532       1 server_others.go:212] Using iptables Proxier.
	I0813 21:10:48.967851       1 server.go:649] Version: v1.22.0-rc.0
	I0813 21:10:48.982302       1 config.go:224] Starting endpoint slice config controller
	I0813 21:10:48.982408       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 21:10:48.982502       1 config.go:315] Starting service config controller
	I0813 21:10:48.982589       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0813 21:10:48.996165       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813205915-30853.169af9f5b64b81d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd5d639bafe6d, ext:384291207, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813205915-30853", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813205915-30853", UID:"no-preload-20210813205915-30853", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813205915-30853.169af9f5b64b81d6" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 21:10:49.082710       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:10:49.107748       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0] <==
	* E0813 21:10:25.422879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:25.431464       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:10:25.431869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:25.432124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:25.432305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:25.432483       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:25.432748       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:25.432922       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:25.436389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:10:26.310536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:10:26.415612       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:26.475529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:26.497944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:26.503605       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:10:26.532670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 21:10:26.709350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:26.757967       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:26.775464       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:10:26.776592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:26.790626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:10:26.854351       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:26.886705       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:10:26.935133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:27.009423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 21:10:28.990359       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:03:59 UTC, end at Fri 2021-08-13 21:11:18 UTC. --
	Aug 13 21:10:54 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:54.984645    5537 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd"
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.104363    5537 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x5kj\" (UniqueName: \"kubernetes.io/projected/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-kube-api-access-8x5kj\") pod \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\" (UID: \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\") "
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.104415    5537 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-config-volume\") pod \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\" (UID: \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\") "
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: W0813 21:10:56.106628    5537 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.107641    5537 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16" (UID: "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.115778    5537 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-kube-api-access-8x5kj" (OuterVolumeSpecName: "kube-api-access-8x5kj") pod "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16" (UID: "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16"). InnerVolumeSpecName "kube-api-access-8x5kj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.205382    5537 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-config-volume\") on node \"no-preload-20210813205915-30853\" DevicePath \"\""
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.205417    5537 reconciler.go:319] "Volume detached for volume \"kube-api-access-8x5kj\" (UniqueName: \"kubernetes.io/projected/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-kube-api-access-8x5kj\") on node \"no-preload-20210813205915-30853\" DevicePath \"\""
	Aug 13 21:10:57 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:57.118281    5537 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b2c79edd-1c3d-4902-9ba5-604d2bf0cb16 path="/var/lib/kubelet/pods/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16/volumes"
	Aug 13 21:10:59 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:59.019228    5537 scope.go:110] "RemoveContainer" containerID="e8819b1c4b8b8e0d8501b29e570d8970a455be807cb8584920ed19f31e409ccf"
	Aug 13 21:11:00 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:00.033240    5537 scope.go:110] "RemoveContainer" containerID="e8819b1c4b8b8e0d8501b29e570d8970a455be807cb8584920ed19f31e409ccf"
	Aug 13 21:11:00 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:00.033575    5537 scope.go:110] "RemoveContainer" containerID="2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	Aug 13 21:11:00 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:00.033825    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-jq4mn_kubernetes-dashboard(1b135bae-5a0b-452f-ac13-77578d4f5d7b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-jq4mn" podUID=1b135bae-5a0b-452f-ac13-77578d4f5d7b
	Aug 13 21:11:01 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:01.051765    5537 scope.go:110] "RemoveContainer" containerID="2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	Aug 13 21:11:01 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:01.057414    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-jq4mn_kubernetes-dashboard(1b135bae-5a0b-452f-ac13-77578d4f5d7b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-jq4mn" podUID=1b135bae-5a0b-452f-ac13-77578d4f5d7b
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.183553    5537 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.183593    5537 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.183713    5537 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9qnsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-sjf7l_kube-system(1a8eb8de-eb5b-4305-9a3c-0f560914ed99): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.184859    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-sjf7l" podUID=1a8eb8de-eb5b-4305-9a3c-0f560914ed99
	Aug 13 21:11:07 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:07.365736    5537 scope.go:110] "RemoveContainer" containerID="2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	Aug 13 21:11:07 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:07.366383    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-jq4mn_kubernetes-dashboard(1b135bae-5a0b-452f-ac13-77578d4f5d7b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-jq4mn" podUID=1b135bae-5a0b-452f-ac13-77578d4f5d7b
	Aug 13 21:11:14 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:14.103408    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-sjf7l" podUID=1a8eb8de-eb5b-4305-9a3c-0f560914ed99
	Aug 13 21:11:15 no-preload-20210813205915-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:11:15 no-preload-20210813205915-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:11:15 no-preload-20210813205915-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6] <==
	* 2021/08/13 21:11:03 Starting overwatch
	2021/08/13 21:11:03 Using namespace: kubernetes-dashboard
	2021/08/13 21:11:03 Using in-cluster config to connect to apiserver
	2021/08/13 21:11:03 Using secret token for csrf signing
	2021/08/13 21:11:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:11:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:11:03 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 21:11:03 Generating JWE encryption key
	2021/08/13 21:11:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:11:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:11:03 Initializing JWE encryption key from synchronized object
	2021/08/13 21:11:03 Creating in-cluster Sidecar client
	2021/08/13 21:11:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:03 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf] <==
	* I0813 21:10:50.038904       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:10:50.065326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:10:50.065697       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:10:50.094327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:10:50.096537       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210813205915-30853_9e80c5ac-1545-45dc-a7ce-f7e1c00875a2!
	I0813 21:10:50.120317       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea4aeaf6-3f36-47b2-a3e5-385b27615b0f", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210813205915-30853_9e80c5ac-1545-45dc-a7ce-f7e1c00875a2 became leader
	I0813 21:10:50.211882       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210813205915-30853_9e80c5ac-1545-45dc-a7ce-f7e1c00875a2!
	

                                                
                                                
-- /stdout --
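The ErrImagePull/ImagePullBackOff loop for metrics-server-7c784ccb57-sjf7l in the kubelet log above looks like deliberate test setup rather than an infrastructure fault: the Audit table later in this section records the metrics-server addon being enabled with --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain (there against the newest-cni profile; the identical fake.domain errors above suggest the same flags were applied to this no-preload profile earlier in the run). A minimal sketch of that pattern, with <profile> as a placeholder name:

	out/minikube-linux-amd64 addons enable metrics-server -p <profile> \
	  --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

Since fake.domain never resolves, the pod stays permanently non-running, which is what the post-mortem checks below key on.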
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853: exit status 2 (259.921989ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-sjf7l
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 describe pod metrics-server-7c784ccb57-sjf7l
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210813205915-30853 describe pod metrics-server-7c784ccb57-sjf7l: exit status 1 (70.363191ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-sjf7l" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210813205915-30853 describe pod metrics-server-7c784ccb57-sjf7l: exit status 1
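The NotFound above is a timing gap rather than a query bug: the pod named by the field-selector listing at helpers_test.go:271 no longer existed by the time the follow-up describe ran. For reference, that listing is plain kubectl selector usage; a sketch of the same query, with <context> as a placeholder:

	kubectl --context <context> get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

Any pod it prints is a candidate for a post-mortem describe, with the caveat seen here that it may already be gone when the describe executes.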
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853: exit status 2 (254.535917ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813205915-30853 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p no-preload-20210813205915-30853 logs -n 25: (1.226321879s)
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:28 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:03:32 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:23 UTC | Fri, 13 Aug 2021 21:08:32 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:08:42 UTC | Fri, 13 Aug 2021 21:08:43 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:00 UTC | Fri, 13 Aug 2021 21:08:52 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:07 UTC | Fri, 13 Aug 2021 21:09:09 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:09 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:11 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:10:25 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:36 UTC | Fri, 13 Aug 2021 21:10:36 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:10:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:40 UTC | Fri, 13 Aug 2021 21:10:41 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:42 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:43 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:10:58 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:14 UTC | Fri, 13 Aug 2021 21:11:14 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | no-preload-20210813205915-30853                            | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:17 UTC | Fri, 13 Aug 2021 21:11:18 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:09:10
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:09:10.673379   12791 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:09:10.673452   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673457   12791 out.go:311] Setting ErrFile to fd 2...
	I0813 21:09:10.673460   12791 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:09:10.673589   12791 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:09:10.673842   12791 out.go:305] Setting JSON to false
	I0813 21:09:10.710967   12791 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":10313,"bootTime":1628878638,"procs":196,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:09:10.711108   12791 start.go:121] virtualization: kvm guest
	I0813 21:09:10.714392   12791 out.go:177] * [newest-cni-20210813210910-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:09:10.716013   12791 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:09:10.714549   12791 notify.go:169] Checking for updates...
	I0813 21:09:10.717634   12791 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:09:10.719077   12791 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:10.720797   12791 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:09:10.721401   12791 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:09:10.721555   12791 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:10.721780   12791 config.go:177] Loaded profile config "old-k8s-version-20210813205823-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 21:09:10.721849   12791 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:09:10.756752   12791 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:09:10.756780   12791 start.go:278] selected driver: kvm2
	I0813 21:09:10.756787   12791 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:09:10.756803   12791 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:09:10.758053   12791 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.758234   12791 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:09:10.769742   12791 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:09:10.769793   12791 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 21:09:10.769818   12791 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 21:09:10.769965   12791 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:09:10.769992   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:10.769999   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:10.770006   12791 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:09:10.770016   12791 start_flags.go:277] config:
	{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:10.770113   12791 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:09:10.772194   12791 out.go:177] * Starting control plane node newest-cni-20210813210910-30853 in cluster newest-cni-20210813210910-30853
	I0813 21:09:10.772225   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:10.772278   12791 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 21:09:10.772313   12791 cache.go:56] Caching tarball of preloaded images
	I0813 21:09:10.772443   12791 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 21:09:10.772466   12791 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 21:09:10.772616   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:10.772647   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json: {Name:mka76415e48e0242b5a1559d0d7199fac2bfb5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:10.772840   12791 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:09:10.772878   12791 start.go:313] acquiring machines lock for newest-cni-20210813210910-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:09:10.772950   12791 start.go:317] acquired machines lock for "newest-cni-20210813210910-30853" in 46.661µs
	I0813 21:09:10.772977   12791 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:09:10.773061   12791 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:09:07.914518   11447 pod_ready.go:102] pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.406958   11447 pod_ready.go:81] duration metric: took 4m0.40016385s waiting for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:08.406984   11447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-xfj59" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:08.407011   11447 pod_ready.go:38] duration metric: took 4m38.843620331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:08.407047   11447 kubeadm.go:604] restartCluster took 5m2.813329014s
	W0813 21:09:08.407209   11447 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:08.407246   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:07.902231   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.401905   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.775162   12791 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 21:09:10.775296   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:09:10.775358   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:09:10.786479   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0813 21:09:10.786930   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:09:10.787562   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:09:10.787587   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:09:10.788015   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:09:10.788228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:10.788398   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:10.788591   12791 start.go:160] libmachine.API.Create for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:10.788640   12791 client.go:168] LocalClient.Create starting
	I0813 21:09:10.788684   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:09:10.788746   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788770   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.788912   12791 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:09:10.788937   12791 main.go:130] libmachine: Decoding PEM data...
	I0813 21:09:10.788956   12791 main.go:130] libmachine: Parsing certificate...
	I0813 21:09:10.789012   12791 main.go:130] libmachine: Running pre-create checks...
	I0813 21:09:10.789029   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .PreCreateCheck
	I0813 21:09:10.789351   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:10.789790   12791 main.go:130] libmachine: Creating machine...
	I0813 21:09:10.789804   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Create
	I0813 21:09:10.789932   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating KVM machine...
	I0813 21:09:10.792752   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found existing default KVM network
	I0813 21:09:10.794412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794251   12815 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc000010800] misses:0}
	I0813 21:09:10.794453   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:10.794342   12815 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 21:09:10.817502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | trying to create private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24...
	I0813 21:09:11.103452   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | private KVM network mk-newest-cni-20210813210910-30853 192.168.39.0/24 created
	I0813 21:09:11.103485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.103368   12815 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.103509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.103562   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:09:11.103608   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:09:11.320966   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.320858   12815 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa...
	I0813 21:09:11.459093   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.458976   12815 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/newest-cni-20210813210910-30853.rawdisk...
	I0813 21:09:11.459148   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing magic tar header
	I0813 21:09:11.459177   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Writing SSH key tar header
	I0813 21:09:11.459194   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.459075   12815 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 ...
	I0813 21:09:11.459223   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853 (perms=drwx------)
	I0813 21:09:11.459288   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853
	I0813 21:09:11.459321   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:09:11.459350   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:09:11.459373   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:09:11.459391   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:09:11.459409   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:09:11.459426   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:09:11.459444   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:09:11.459464   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:09:11.459485   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:09:11.459500   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Checking permissions on dir: /home
	I0813 21:09:11.459515   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:09:11.459528   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Skipping /home - not owner
	I0813 21:09:11.459546   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.488427   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:ee:fb:7e in network default
	I0813 21:09:11.489099   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring networks are active...
	I0813 21:09:11.489140   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.491476   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network default is active
	I0813 21:09:11.491829   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network mk-newest-cni-20210813210910-30853 is active
	I0813 21:09:11.492457   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Getting domain xml...
	I0813 21:09:11.494775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:09:11.955786   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting to get IP...
	I0813 21:09:11.956670   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957315   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:11.957341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:11.957262   12815 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:09:12.221730   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222307   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.222349   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.222212   12815 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:09:12.604662   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605164   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:12.605191   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:12.605108   12815 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:09:13.029701   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030156   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.030218   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.030122   12815 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:09:13.504659   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505143   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:13.505173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:13.505105   12815 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:09:14.093824   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094412   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.094446   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.094345   12815 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:09:14.929917   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930509   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:14.930535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:14.930469   12815 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:09:12.902877   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.903637   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.678952   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679492   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:15.679571   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:15.679462   12815 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:09:16.668007   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668572   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:16.668609   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:16.668495   12815 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:09:17.859819   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860363   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:17.860390   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:17.860285   12815 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:09:19.539855   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:19.540530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:19.540442   12815 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:09:17.403580   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.901370   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.902145   11600 pod_ready.go:102] pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.887601   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find current IP address of domain newest-cni-20210813210910-30853 in network mk-newest-cni-20210813210910-30853
	I0813 21:09:21.888151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | I0813 21:09:21.888074   12815 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:09:25.255905   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256490   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Found IP for machine: 192.168.39.210
	I0813 21:09:25.256524   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has current primary IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.256535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserving static IP address...
	I0813 21:09:25.256915   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | unable to find host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"} in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.303282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserved static IP address: 192.168.39.210
	I0813 21:09:25.303341   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Getting to WaitForSSH function...
	I0813 21:09:25.303352   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting for SSH to be available...
	I0813 21:09:25.309055   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309442   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.309474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.309627   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH client type: external
	I0813 21:09:25.309651   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa (-rw-------)
	I0813 21:09:25.309698   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:09:25.309731   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | About to run SSH command:
	I0813 21:09:25.309744   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | exit 0
	I0813 21:09:25.467104   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:09:25.467603   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) KVM machine creation complete!
	I0813 21:09:25.467679   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:25.468310   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468513   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:25.468691   12791 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:09:25.468710   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:09:25.471536   12791 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:09:25.471555   12791 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:09:25.471565   12791 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:09:25.471575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.476123   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476450   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.476479   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.476604   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.476755   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.476933   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.477105   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.477284   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.477466   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.477480   12791 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:09:25.594161   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.594190   12791 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:09:25.594203   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.600130   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600531   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.600564   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.600765   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.600974   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.601303   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.601456   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.601620   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.601635   12791 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:09:22.392237   11600 pod_ready.go:81] duration metric: took 4m0.007094721s waiting for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" ...
	E0813 21:09:22.392261   11600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rfp5v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:09:22.392283   11600 pod_ready.go:38] duration metric: took 4m14.135839126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:09:22.392312   11600 kubeadm.go:604] restartCluster took 4m52.280117973s
	W0813 21:09:22.392448   11600 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:09:22.392485   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 21:09:25.715874   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:09:25.715991   12791 main.go:130] libmachine: found compatible host: buildroot
	I0813 21:09:25.716007   12791 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:09:25.716023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716285   12791 buildroot.go:166] provisioning hostname "newest-cni-20210813210910-30853"
	I0813 21:09:25.716311   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.716475   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.722141   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.722575   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.722814   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.723002   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723169   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.723323   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.723458   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.723611   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.723626   12791 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname
	I0813 21:09:25.855120   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813210910-30853
	
	I0813 21:09:25.855151   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.861182   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861544   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.861567   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.861715   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:25.861922   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:25.862214   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:25.862344   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:25.862548   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:25.862577   12791 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813210910-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813210910-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813210910-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:09:25.982023   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:09:25.982082   12791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:09:25.982118   12791 buildroot.go:174] setting up certificates
	I0813 21:09:25.982134   12791 provision.go:83] configureAuth start
	I0813 21:09:25.982150   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:09:25.982399   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:25.988009   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.988380   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.988535   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:25.993579   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.993994   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:25.994024   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:25.994249   12791 provision.go:138] copyHostCerts
	I0813 21:09:25.994336   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:09:25.994347   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:09:25.994396   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:09:25.994483   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:09:25.994497   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:09:25.994532   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:09:25.994643   12791 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:09:25.994656   12791 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:09:25.994688   12791 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:09:25.994760   12791 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813210910-30853 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube newest-cni-20210813210910-30853]
	I0813 21:09:26.305745   12791 provision.go:172] copyRemoteCerts
	I0813 21:09:26.305810   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:09:26.305840   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.311502   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.311880   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.311916   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.312018   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.312266   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.312474   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.312635   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:26.397917   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:09:26.415261   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:09:26.432018   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:09:26.448392   12791 provision.go:86] duration metric: configureAuth took 466.244488ms
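configureAuth above generates a CA-signed server certificate for the guest with the SANs listed at 21:09:25.994760, then copies it to /etc/docker. minikube does this in-process with Go's crypto packages; a rough, hypothetical openssl equivalent (file names from the log, flags illustrative only):

	openssl req -new -key server-key.pem -subj "/O=jenkins.newest-cni-20210813210910-30853" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.39.210,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:newest-cni-20210813210910-30853") \
	  -out server.pem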
	I0813 21:09:26.448413   12791 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:09:26.448550   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:09:26.448647   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:26.453886   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:26.454267   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:26.454404   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:26.454578   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454719   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:26.454882   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:26.455020   12791 main.go:130] libmachine: Using SSH client type: native
	I0813 21:09:26.455171   12791 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:09:26.455193   12791 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:09:27.218253   12791 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
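The SSH command above installs a CRI-O environment override and restarts the runtime; the echoed line is the file content. To inspect it by hand with the harness binary used elsewhere in this report (a sketch):

	out/minikube-linux-amd64 -p newest-cni-20210813210910-30853 ssh "cat /etc/sysconfig/crio.minikube"
	out/minikube-linux-amd64 -p newest-cni-20210813210910-30853 ssh "sudo systemctl is-active crio"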
	
	I0813 21:09:27.218291   12791 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:09:27.218304   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetURL
	I0813 21:09:27.220942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using libvirt version 3000000
	I0813 21:09:27.225565   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.225908   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.225955   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.226230   12791 main.go:130] libmachine: Docker is up and running!
	I0813 21:09:27.226255   12791 main.go:130] libmachine: Reticulating splines...
	I0813 21:09:27.226262   12791 client.go:171] LocalClient.Create took 16.437611332s
	I0813 21:09:27.226308   12791 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813210910-30853" took 16.437720973s
	I0813 21:09:27.226319   12791 start.go:267] post-start starting for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:09:27.226323   12791 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:09:27.226339   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.226579   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:09:27.226605   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.231167   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231514   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.231541   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.231723   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.231888   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.232115   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.232258   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.318810   12791 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:09:27.324679   12791 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:09:27.324708   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:09:27.324766   12791 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:09:27.324867   12791 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:09:27.324993   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:09:27.332665   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:27.349495   12791 start.go:270] post-start completed in 123.164223ms
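The filesync scan above mirrors everything under the host's .minikube/files/ tree into the guest at the same path, which is how 308532.pem lands in /etc/ssl/certs. The same mechanism works for any file (hypothetical example):

	mkdir -p ~/.minikube/files/etc/ssl/certs
	cp corp-ca.pem ~/.minikube/files/etc/ssl/certs/
	minikube start   # post-start copies it to /etc/ssl/certs/corp-ca.pem in the guest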
	I0813 21:09:27.349583   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:09:27.350235   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.356173   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356503   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.356569   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.356804   12791 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:09:27.357034   12791 start.go:129] duration metric: createHost completed in 16.583958717s
	I0813 21:09:27.357054   12791 start.go:80] releasing machines lock for "newest-cni-20210813210910-30853", held for 16.584089955s
	I0813 21:09:27.357097   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.357282   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:27.361779   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362087   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.362122   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.362275   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362445   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.362924   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:09:27.363133   12791 ssh_runner.go:149] Run: systemctl --version
	I0813 21:09:27.363160   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.363219   12791 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:09:27.363264   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:09:27.368253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368519   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.368556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.368628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.368784   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.368919   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.369055   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.369149   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369521   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:27.369556   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:27.369717   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:09:27.369863   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:09:27.369979   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:09:27.370099   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:09:27.452425   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:27.452543   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:31.448706   12791 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.996135455s)
	I0813 21:09:31.448838   12791 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:09:31.448901   12791 ssh_runner.go:149] Run: which lz4
	I0813 21:09:31.453326   12791 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:09:31.458022   12791 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:09:31.458058   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 21:09:34.040840   12791 crio.go:362] Took 2.587545 seconds to copy over tarball
	I0813 21:09:34.040960   12791 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
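No preloaded images were found (21:09:31.448838), so the ~590 MB preload tarball is streamed to the guest and unpacked into /var, pre-populating CRI-O's image store. A hand-run sketch of the same transfer (minikube streams over its own SSH runner; plain scp to / would lack permissions, hence the sudo tee):

	ssh -i .minikube/machines/newest-cni-20210813210910-30853/id_rsa docker@192.168.39.210 \
	  'sudo tee /preloaded.tar.lz4 >/dev/null' < .minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	ssh -i .minikube/machines/newest-cni-20210813210910-30853/id_rsa docker@192.168.39.210 \
	  'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'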
	I0813 21:09:39.662568   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.255292287s)
	I0813 21:09:39.662654   11447 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:09:39.679831   11447 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:09:39.679928   11447 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:39.725756   11447 cri.go:76] found id: ""
	I0813 21:09:39.725838   11447 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:39.734367   11447 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:39.743419   11447 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:39.743465   11447 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:39.046178   12791 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.005181631s)
	I0813 21:09:39.046212   12791 crio.go:369] Took 5.005343 seconds to extract the tarball
	I0813 21:09:39.046225   12791 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:09:39.096327   12791 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:09:39.108664   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:09:39.120896   12791 docker.go:153] disabling docker service ...
	I0813 21:09:39.120956   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:09:39.132781   12791 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:09:39.144772   12791 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:09:39.291366   12791 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:09:39.473805   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:09:39.488990   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:09:39.508851   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:09:39.519787   12791 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:09:39.527766   12791 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:09:39.527827   12791 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:09:39.549292   12791 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:09:39.557653   12791 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:09:39.695889   12791 ssh_runner.go:149] Run: sudo systemctl start crio
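The sysctl probe at 21:09:39.519787 failed only because br_netfilter was not loaded yet; after the modprobe, bridged pod traffic becomes visible to iptables, which kube-proxy relies on. To confirm inside the guest:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1
	cat /proc/sys/net/ipv4/ip_forward           # expect: 1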
	I0813 21:09:39.852538   12791 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:09:39.852673   12791 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:09:39.865143   12791 start.go:413] Will wait 60s for crictl version
	I0813 21:09:39.865219   12791 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:09:39.902891   12791 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:09:39.902976   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:40.146285   12791 ssh_runner.go:149] Run: crio --version
	I0813 21:09:44.881949   11447 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:44.881970   12791 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:09:44.882025   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:09:44.888023   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888330   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:09:44.888361   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:09:44.888544   12791 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:09:44.893252   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:09:44.903812   12791 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.crt
	I0813 21:09:44.903997   12791 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:44.922443   12791 out.go:177]   - kubelet.network-plugin=cni
	I0813 21:09:44.923908   12791 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:09:44.923979   12791 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:09:44.924054   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.004762   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.004791   12791 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:09:45.004856   12791 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:09:45.042121   12791 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:09:45.042150   12791 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:09:45.042226   12791 ssh_runner.go:149] Run: crio config
	I0813 21:09:45.253009   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:09:45.253045   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:09:45.253059   12791 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:09:45.253078   12791 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813210910-30853 NodeName:newest-cni-20210813210910-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.210 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:09:45.253242   12791 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813210910-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
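A generated config like the one above can be sanity-checked before it is applied; kubeadm's --dry-run renders the manifests without touching the node (a sketch using the paths from this run):

	sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run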
	
	I0813 21:09:45.253382   12791 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813210910-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.210 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
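The kubelet flags above are delivered as a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 21:09:45.268144); the empty ExecStart= line clears the packaged command before the override takes effect. To inspect the merged unit on the guest:

	systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf override
	sudo systemctl daemon-reload   # required after editing any drop-in
	sudo systemctl restart kubelet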
	I0813 21:09:45.253451   12791 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:09:45.260928   12791 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:09:45.260983   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:09:45.268144   12791 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (554 bytes)
	I0813 21:09:45.280833   12791 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:09:45.293352   12791 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0813 21:09:45.306281   12791 ssh_runner.go:149] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0813 21:09:45.310235   12791 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:09:45.322126   12791 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853 for IP: 192.168.39.210
	I0813 21:09:45.322191   12791 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:09:45.322212   12791 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:09:45.322281   12791 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:09:45.322307   12791 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a
	I0813 21:09:45.322319   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a with IP's: [192.168.39.210 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:09:45.521630   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a ...
	I0813 21:09:45.521662   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a: {Name:mk4aa4db18dba264c364eea6455fafca6541c687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521857   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a ...
	I0813 21:09:45.521869   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a: {Name:mk4bafabda5b550064b81d0be7e6d613e7cbe853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.521953   12791 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt
	I0813 21:09:45.522012   12791 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key
	I0813 21:09:45.522063   12791 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key
	I0813 21:09:45.522071   12791 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt with IP's: []
	I0813 21:09:45.572044   12791 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt ...
	I0813 21:09:45.572072   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt: {Name:mk46480092ca0ddfdbb22ced231c8543e6fadff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.572258   12791 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key ...
	I0813 21:09:45.572270   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key: {Name:mk2ff838c1ce904cf05995003085f2c953d17b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:09:45.572443   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:09:45.572486   12791 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:09:45.572497   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:09:45.572520   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:09:45.572550   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:09:45.572575   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:09:45.572620   12791 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:09:45.573530   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:09:45.591406   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:09:45.607675   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:09:45.623382   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:09:44.885025   11447 out.go:204]   - Booting up control plane ...
	I0813 21:09:45.638600   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:09:45.655496   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:09:45.672748   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:09:45.690934   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:09:45.709394   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:09:45.727886   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:09:45.747118   12791 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:09:45.764623   12791 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:09:45.776487   12791 ssh_runner.go:149] Run: openssl version
	I0813 21:09:45.782506   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:09:45.790602   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795798   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.795845   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:09:45.801633   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:09:45.809459   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:09:45.817086   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821525   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.821581   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:09:45.827427   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:09:45.835137   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:09:45.843222   12791 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848030   12791 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.848070   12791 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:09:45.854871   12791 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
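The test/ln sequence above follows OpenSSL's c_rehash convention: each CA in /etc/ssl/certs is addressed through a symlink named <subject-hash>.0, so the hash is computed first and the link created only if missing. Reproducing one link by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0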
	I0813 21:09:45.863382   12791 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:09:45.863483   12791 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:09:45.863550   12791 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:09:45.897179   12791 cri.go:76] found id: ""
	I0813 21:09:45.897265   12791 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:09:45.904791   12791 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:09:45.911599   12791 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:09:45.918334   12791 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:09:45.918383   12791 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:09:57.982116   11447 out.go:204]   - Configuring RBAC rules ...
	I0813 21:09:58.584325   11447 cni.go:93] Creating CNI manager for ""
	I0813 21:09:58.584349   11447 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:00.460094   12791 out.go:204]   - Generating certificates and keys ...
	I0813 21:09:58.586084   11447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:09:58.586145   11447 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:09:58.603522   11447 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:09:58.627002   11447 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:09:58.627101   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:58.627103   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813210102-30853 minikube.k8s.io/updated_at=2021_08_13T21_09_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.050930   11447 ops.go:34] apiserver oom_adj: -16
	I0813 21:09:59.051059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:09:59.695711   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.195937   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:00.695450   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.003248   12791 out.go:204]   - Booting up control plane ...
	I0813 21:10:01.195565   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:01.695971   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.195512   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:02.696069   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.195960   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:03.696007   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.195636   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:04.695628   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.195701   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:05.695999   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.044352   11600 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (46.651842681s)
	I0813 21:10:09.044429   11600 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:09.059478   11600 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:09.059553   11600 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:09.093284   11600 cri.go:76] found id: ""
	I0813 21:10:09.093381   11600 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:09.100568   11600 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:09.107226   11600 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:09.107269   11600 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:10:06.195800   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:06.695240   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.195746   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:07.695213   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.195912   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:08.695965   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.195595   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.696049   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.195131   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:10.695293   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:09.730908   11600 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:11.196059   11447 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:11.534135   11447 kubeadm.go:985] duration metric: took 12.907094032s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:11.534170   11447 kubeadm.go:392] StartCluster complete in 6m5.98958255s
	I0813 21:10:11.534191   11447 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:11.534316   11447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:11.535601   11447 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:12.110091   11447 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813210102-30853" rescaled to 1
	I0813 21:10:12.110179   11447 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:10:12.112084   11447 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:12.110253   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:12.112158   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:12.110569   11447 config.go:177] Loaded profile config "default-k8s-different-port-20210813210102-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 21:10:12.110623   11447 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:10:12.112334   11447 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112337   11447 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112351   11447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112358   11447 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112366   11447 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:12.112400   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112736   11447 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112752   11447 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112760   11447 addons.go:147] addon metrics-server should already be in state true
	I0813 21:10:12.112763   11447 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:12.112774   11447 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.112783   11447 addons.go:147] addon dashboard should already be in state true
	I0813 21:10:12.112784   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112802   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.112857   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.112894   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.112750   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113192   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113201   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.113224   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113233   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.113340   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.140644   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0813 21:10:12.140642   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0813 21:10:12.140661   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0813 21:10:12.141348   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141465   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141541   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.141935   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.141953   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142074   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142081   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.142089   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142093   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.142438   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.142486   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143136   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143176   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.143388   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.143929   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.143972   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.144251   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.144301   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0813 21:10:12.144729   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.145337   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.145357   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.145698   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.146348   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.146380   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161135   11447 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813210102-30853"
	W0813 21:10:12.161159   11447 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:12.161188   11447 host.go:66] Checking if "default-k8s-different-port-20210813210102-30853" exists ...
	I0813 21:10:12.161594   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.161636   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.161853   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0813 21:10:12.161878   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0813 21:10:12.162218   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162412   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.162720   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162740   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.162900   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.162921   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.163146   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.163294   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.166669   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.169181   11447 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.169252   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:10:12.169267   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:10:12.167214   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.169288   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.169571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.173910   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.175978   11447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:12.176070   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0813 21:10:12.176093   11447 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.176103   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:12.176120   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.175639   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.176186   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.176216   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.175916   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0813 21:10:12.176232   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.176420   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.176469   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.176549   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.176672   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.176869   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.177027   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177041   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177293   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.177308   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.177366   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.177663   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.177782   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.178349   11447 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:12.178391   11447 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:12.181885   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.183919   11447 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:10:12.182804   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183976   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.184012   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.183416   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:10.812498   11600 out.go:204]   - Booting up control plane ...
	I0813 21:10:12.186349   11447 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:10:12.186413   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:10:12.184193   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.186427   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:10:12.186446   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.186621   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.186808   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.190702   11447 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0813 21:10:12.191063   11447 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:12.191556   11447 main.go:130] libmachine: Using API Version  1
	I0813 21:10:12.191584   11447 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:12.191977   11447 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:12.192165   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetState
	I0813 21:10:12.192357   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192757   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.192786   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.192929   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.193084   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.193242   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.193363   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.195129   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .DriverName
	I0813 21:10:12.195341   11447 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.195358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:12.195378   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHHostname
	I0813 21:10:12.200908   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201282   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:ca:98", ip: ""} in network mk-default-k8s-different-port-20210813210102-30853: {Iface:virbr2 ExpiryTime:2021-08-13 22:03:42 +0000 UTC Type:0 Mac:52:54:00:37:ca:98 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:default-k8s-different-port-20210813210102-30853 Clientid:01:52:54:00:37:ca:98}
	I0813 21:10:12.201309   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | domain default-k8s-different-port-20210813210102-30853 has defined IP address 192.168.50.136 and MAC address 52:54:00:37:ca:98 in network mk-default-k8s-different-port-20210813210102-30853
	I0813 21:10:12.201443   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHPort
	I0813 21:10:12.201571   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHKeyPath
	I0813 21:10:12.201711   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .GetSSHUsername
	I0813 21:10:12.201825   11447 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210102-30853/id_rsa Username:docker}
	I0813 21:10:12.425248   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:12.468978   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:10:12.469021   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:10:12.494701   11447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.495206   11447 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:10:12.499329   11447 node_ready.go:49] node "default-k8s-different-port-20210813210102-30853" has status "Ready":"True"
	I0813 21:10:12.499359   11447 node_ready.go:38] duration metric: took 4.621451ms waiting for node "default-k8s-different-port-20210813210102-30853" to be "Ready" ...
	I0813 21:10:12.499373   11447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:12.499757   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:12.510602   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:12.610525   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:10:12.610562   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:10:12.656245   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:10:12.656276   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:10:12.772157   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:10:12.772191   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:10:12.815178   11447 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:12.815208   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:10:12.932243   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:10:12.932272   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:10:12.992201   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:13.151328   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:10:13.151358   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:10:13.272742   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:10:13.272771   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:10:13.504799   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:10:13.504829   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:10:13.711447   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:10:13.711476   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:10:13.833690   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:10:13.833722   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:10:13.907807   11447 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:13.907839   11447 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:10:14.189833   11447 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:14.535190   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.411080   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.985781369s)
	I0813 21:10:15.411145   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411139   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.91134851s)
	I0813 21:10:15.411163   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411180   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.411211   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.411243   11447 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.916004514s)
	I0813 21:10:15.411301   11447 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0813 21:10:15.412648   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412658   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.412711   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412721   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412731   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412738   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.412765   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.412779   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.412797   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.412740   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413131   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413156   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413170   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413203   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413207   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:15.413222   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:15.413245   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:15.413261   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:15.413535   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:15.413550   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138255   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.145991542s)
	I0813 21:10:16.138325   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138339   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138639   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.138660   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.138663   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.138692   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:16.138702   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:16.138996   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:16.139040   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:16.139056   11447 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813210102-30853"
	I0813 21:10:16.138998   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:16.609336   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:17.060932   11447 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.871038717s)
	I0813 21:10:17.061005   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061023   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061327   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061348   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.061358   11447 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:17.061349   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061370   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) Calling .Close
	I0813 21:10:17.061708   11447 main.go:130] libmachine: (default-k8s-different-port-20210813210102-30853) DBG | Closing plugin on server side
	I0813 21:10:17.061715   11447 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:17.061777   11447 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:17.064437   11447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:10:17.064471   11447 addons.go:344] enableAddons completed in 4.953854482s
	I0813 21:10:19.033855   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:21.685414   12791 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:22.697730   12791 cni.go:93] Creating CNI manager for ""
	I0813 21:10:22.697758   12791 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:22.699669   12791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:22.699748   12791 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:22.711081   12791 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:10:22.740715   12791 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:22.740845   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:22.740928   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813210910-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_22_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.063141   12791 ops.go:34] apiserver oom_adj: -16
	I0813 21:10:23.063228   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:23.680146   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.179617   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:24.680324   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:25.180108   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:21.530978   11447 pod_ready.go:102] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.032299   11447 pod_ready.go:92] pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:22.032329   11447 pod_ready.go:81] duration metric: took 9.521694058s waiting for pod "coredns-558bd4d5db-jphw4" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:22.032343   11447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.052078   11447 pod_ready.go:102] pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.548192   11447 pod_ready.go:97] error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548233   11447 pod_ready.go:81] duration metric: took 2.515881289s waiting for pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:24.548247   11447 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-xmqhp" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-xmqhp" not found
	I0813 21:10:24.548257   11447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554129   11447 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.554154   11447 pod_ready.go:81] duration metric: took 5.887843ms waiting for pod "etcd-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.554167   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559840   11447 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.559859   11447 pod_ready.go:81] duration metric: took 5.68331ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.559871   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565198   11447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.565217   11447 pod_ready.go:81] duration metric: took 5.336694ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.565226   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571811   11447 pod_ready.go:92] pod "kube-proxy-jn56d" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.571827   11447 pod_ready.go:81] duration metric: took 6.594619ms waiting for pod "kube-proxy-jn56d" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.571837   11447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749142   11447 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:24.749167   11447 pod_ready.go:81] duration metric: took 177.31996ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210102-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:24.749179   11447 pod_ready.go:38] duration metric: took 12.249789309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:24.749199   11447 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:24.749257   11447 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:24.784712   11447 api_server.go:70] duration metric: took 12.674498021s to wait for apiserver process to appear ...
	I0813 21:10:24.784740   11447 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:24.784753   11447 api_server.go:239] Checking apiserver healthz at https://192.168.50.136:8444/healthz ...
	I0813 21:10:24.793567   11447 api_server.go:265] https://192.168.50.136:8444/healthz returned 200:
	ok
	I0813 21:10:24.794892   11447 api_server.go:139] control plane version: v1.21.3
	I0813 21:10:24.794914   11447 api_server.go:129] duration metric: took 10.167822ms to wait for apiserver health ...
	I0813 21:10:24.794925   11447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:24.951664   11447 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:24.951701   11447 system_pods.go:61] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:24.951709   11447 system_pods.go:61] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:24.951717   11447 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:24.951726   11447 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:24.951731   11447 system_pods.go:61] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:24.951736   11447 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:24.951745   11447 system_pods.go:61] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:24.951753   11447 system_pods.go:61] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:24.951765   11447 system_pods.go:74] duration metric: took 156.833527ms to wait for pod list to return data ...
	I0813 21:10:24.951775   11447 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:25.148940   11447 default_sa.go:45] found service account: "default"
	I0813 21:10:25.148969   11447 default_sa.go:55] duration metric: took 197.176977ms for default service account to be created ...
	I0813 21:10:25.148984   11447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:10:25.352044   11447 system_pods.go:86] 8 kube-system pods found
	I0813 21:10:25.352084   11447 system_pods.go:89] "coredns-558bd4d5db-jphw4" [057e9392-38dd-4c71-a09d-83ae9055347e] Running
	I0813 21:10:25.352096   11447 system_pods.go:89] "etcd-default-k8s-different-port-20210813210102-30853" [663c755b-7d29-4114-a1ff-2216c7e74716] Running
	I0813 21:10:25.352103   11447 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813210102-30853" [74f98aff-af48-4328-bee1-8f02162674db] Running
	I0813 21:10:25.352112   11447 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813210102-30853" [77d2d0a4-2421-4895-af76-15c395e6c465] Running
	I0813 21:10:25.352119   11447 system_pods.go:89] "kube-proxy-jn56d" [bf9beff3-8f15-4901-9886-ef5f0d821182] Running
	I0813 21:10:25.352129   11447 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813210102-30853" [21fdb84c-27b1-4592-8914-bf32c1b56ecf] Running
	I0813 21:10:25.352141   11447 system_pods.go:89] "metrics-server-7c784ccb57-cdhkk" [899ed30f-faf1-40e3-9a46-c1ad31aa7f70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:25.352150   11447 system_pods.go:89] "storage-provisioner" [3b577536-5550-42ee-a361-275f78e67c9e] Running
	I0813 21:10:25.352160   11447 system_pods.go:126] duration metric: took 203.170374ms to wait for k8s-apps to be running ...
	I0813 21:10:25.352177   11447 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:10:25.352232   11447 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:25.366009   11447 system_svc.go:56] duration metric: took 13.82353ms WaitForService to wait for kubelet.
	I0813 21:10:25.366041   11447 kubeadm.go:547] duration metric: took 13.255833147s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:10:25.366078   11447 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:25.671992   11447 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:25.672026   11447 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:25.672045   11447 node_conditions.go:105] duration metric: took 305.961488ms to run NodePressure ...
	I0813 21:10:25.672058   11447 start.go:231] waiting for startup goroutines ...
	I0813 21:10:25.741468   11447 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:10:25.743555   11447 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813210102-30853" cluster and "default" namespace by default
	I0813 21:10:29.004104   11600 out.go:204]   - Configuring RBAC rules ...
	I0813 21:10:29.713525   11600 cni.go:93] Creating CNI manager for ""
	I0813 21:10:29.713570   11600 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:10:25.680008   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.180477   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:26.680294   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.180411   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:27.679956   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.179559   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:28.679596   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.179509   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.679704   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.180325   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.715719   11600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:10:29.715784   11600 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:10:29.736151   11600 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:10:29.781971   11600 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:10:29.782030   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.782090   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813205915-30853 minikube.k8s.io/updated_at=2021_08_13T21_10_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:29.830681   11600 ops.go:34] apiserver oom_adj: -16
	I0813 21:10:30.150647   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.779463   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.280355   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.779613   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:30.680059   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.180084   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:31.679975   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.179732   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.679873   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.179878   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.679567   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.180100   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.679513   12791 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.825619   12791 kubeadm.go:985] duration metric: took 12.084819945s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:34.825653   12791 kubeadm.go:392] StartCluster complete in 48.962278505s
	I0813 21:10:34.825676   12791 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:34.825790   12791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:34.827844   12791 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:35.357758   12791 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813210910-30853" rescaled to 1
	I0813 21:10:35.357830   12791 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:10:35.359667   12791 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:35.357884   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:35.357927   12791 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 21:10:35.358131   12791 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:10:35.359798   12791 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359818   12791 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:35.359820   12791 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.359828   12791 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:35.359855   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.359852   12791 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813210910-30853"
	I0813 21:10:35.359908   12791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813210910-30853"
	I0813 21:10:35.360333   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360381   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.360414   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.360455   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.374986   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42203
	I0813 21:10:35.375050   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0813 21:10:35.375635   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.375910   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.377813   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377836   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.377912   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.377925   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.378238   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.378810   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.378869   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.379811   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.380004   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.391384   12791 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813210910-30853"
	W0813 21:10:35.391410   12791 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:35.391438   12791 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:10:35.391832   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.391897   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.391999   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0813 21:10:35.392393   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.392989   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.393014   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.393496   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.393691   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.397628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.400074   12791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:35.400221   12791 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:35.400233   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:35.400253   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.406732   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0813 21:10:35.407200   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.407553   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.407703   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.407724   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.408324   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.408333   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.408348   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.408363   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.408489   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.408643   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.408815   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.409189   12791 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:35.409266   12791 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:35.424756   12791 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0813 21:10:35.425178   12791 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:35.425688   12791 main.go:130] libmachine: Using API Version  1
	I0813 21:10:35.425717   12791 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:35.426032   12791 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:35.426208   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:10:35.429530   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:10:35.429754   12791 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:35.429775   12791 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:35.429797   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:10:35.436000   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436628   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:10:35.436664   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:10:35.436775   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:10:35.436942   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:10:35.437117   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:10:35.437291   12791 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:10:35.594125   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:32.279420   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:32.780066   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.280227   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:33.779756   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.280100   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:34.779428   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.279470   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.779478   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.279401   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:36.779390   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:35.796621   12791 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:36.020007   12791 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:10:36.022097   12791 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:36.022141   12791 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:37.953285   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.359113303s)
	I0813 21:10:37.953357   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953374   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.953716   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.953737   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:37.953747   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:37.953764   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:37.954032   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:37.954047   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018145   12791 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.221484906s)
	I0813 21:10:38.018195   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018210   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018146   12791 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.995992413s)
	I0813 21:10:38.018276   12791 api_server.go:70] duration metric: took 2.660410949s to wait for apiserver process to appear ...
	I0813 21:10:38.018284   12791 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:38.018293   12791 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:10:38.018510   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018529   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018538   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018547   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.018806   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.018828   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.018842   12791 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:38.018866   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:10:38.019228   12791 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:10:38.019231   12791 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:38.019253   12791 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:38.021307   12791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 21:10:38.021330   12791 addons.go:344] enableAddons completed in 2.663409626s
	I0813 21:10:38.037183   12791 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:10:38.040155   12791 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:10:38.040215   12791 api_server.go:129] duration metric: took 21.924445ms to wait for apiserver health ...
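
The api_server.go checkpoints above poll the apiserver's /healthz endpoint until it answers 200 "ok". A minimal Go sketch of that probe, with an illustrative waitForHealthz helper; certificate verification is skipped here for brevity, whereas the real client would trust the cluster CA from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		// Assumption for the sketch only: skip TLS verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not report ok within %v", timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.39.210:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
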
	I0813 21:10:38.040228   12791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:38.072532   12791 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:38.072583   12791 system_pods.go:61] "coredns-78fcd69978-42frp" [ffc12ff0-fe4e-422b-ae81-83f17416e379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072594   12791 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:10:38.072605   12791 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running
	I0813 21:10:38.072623   12791 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:10:38.072630   12791 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running
	I0813 21:10:38.072639   12791 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running
	I0813 21:10:38.072646   12791 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:10:38.072656   12791 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:10:38.072667   12791 system_pods.go:74] duration metric: took 32.432184ms to wait for pod list to return data ...
	I0813 21:10:38.072681   12791 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:38.079488   12791 default_sa.go:45] found service account: "default"
	I0813 21:10:38.079509   12791 default_sa.go:55] duration metric: took 6.821814ms for default service account to be created ...
	I0813 21:10:38.079522   12791 kubeadm.go:547] duration metric: took 2.721660353s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:10:38.079544   12791 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:38.087838   12791 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.06779332s)
	I0813 21:10:38.087870   12791 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
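
The pipeline that just completed rewrites the coredns ConfigMap in place: a hosts{} block is inserted ahead of the "forward . /etc/resolv.conf" plugin so that host.minikube.internal resolves to the host-side gateway IP (192.168.39.1 here). A hedged Go sketch of the same string edit, independent of the kubectl/sed pipeline in the log; the sample Corefile contents are illustrative:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block immediately before the
// "forward . /etc/resolv.conf" line of a Corefile, mirroring the sed
// expression visible in the log above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert just before the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
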
	I0813 21:10:38.089094   12791 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:38.089130   12791 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:38.089146   12791 node_conditions.go:105] duration metric: took 9.595836ms to run NodePressure ...
	I0813 21:10:38.089160   12791 start.go:231] waiting for startup goroutines ...
	I0813 21:10:38.151075   12791 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:10:38.152833   12791 out.go:177] 
	W0813 21:10:38.153012   12791 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:10:38.154648   12791 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:10:38.156287   12791 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813210910-30853" cluster and "default" namespace by default
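
Both cluster startups end with the same warning because the host kubectl (1.20.5) trails the cluster (1.22.0-rc.0) by two minor versions, one more than the +/-1 skew kubectl officially supports. A small sketch of that comparison, with an illustrative minorOf helper:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component of a "major.minor.patch" version string.
func minorOf(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unparseable version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectlMinor, _ := minorOf("1.20.5")      // version from the log
	clusterMinor, _ := minorOf("1.22.0-rc.0") // version from the log
	if skew := clusterMinor - kubectlMinor; skew > 1 || skew < -1 {
		fmt.Printf("! kubectl is version 1.%d, which may be incompatible with Kubernetes 1.%d (minor skew: %d)\n",
			kubectlMinor, clusterMinor, skew)
	}
}
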
	I0813 21:10:37.279672   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:37.780229   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:38.279437   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:38.780138   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:39.279696   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:39.780100   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:40.279336   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:40.780189   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:41.279752   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:41.780283   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:42.280242   11600 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:42.595734   11600 kubeadm.go:985] duration metric: took 12.813777513s to wait for elevateKubeSystemPrivileges.
	I0813 21:10:42.595765   11600 kubeadm.go:392] StartCluster complete in 6m12.527422021s
	I0813 21:10:42.595790   11600 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:42.595896   11600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:10:42.597520   11600 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:10:43.236927   11600 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813205915-30853" rescaled to 1
	I0813 21:10:43.236992   11600 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.105.107 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:10:43.239406   11600 out.go:177] * Verifying Kubernetes components...
	I0813 21:10:43.239457   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:43.237045   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:10:43.237068   11600 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:10:43.239565   11600 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.237236   11600 config.go:177] Loaded profile config "no-preload-20210813205915-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:10:43.239587   11600 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.239595   11600 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:10:43.239629   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.239632   11600 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.239635   11600 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.239647   11600 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.239656   11600 addons.go:147] addon metrics-server should already be in state true
	I0813 21:10:43.239658   11600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813205915-30853"
	I0813 21:10:43.239683   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.239570   11600 addons.go:59] Setting dashboard=true in profile "no-preload-20210813205915-30853"
	I0813 21:10:43.239728   11600 addons.go:135] Setting addon dashboard=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.239746   11600 addons.go:147] addon dashboard should already be in state true
	I0813 21:10:43.239775   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.240104   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240104   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240104   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240150   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.240220   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.240239   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.240255   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.240314   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.252172   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0813 21:10:43.252624   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.253172   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.253192   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.253594   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.254174   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.254214   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.254494   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0813 21:10:43.254933   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.255405   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.255426   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.255490   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0813 21:10:43.255831   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.256032   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.256290   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.256307   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.256603   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.256646   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.256747   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.256913   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.266911   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33871
	I0813 21:10:43.267347   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.267815   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.267839   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.268171   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.268724   11600 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813205915-30853"
	W0813 21:10:43.268749   11600 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:10:43.268762   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.268778   11600 host.go:66] Checking if "no-preload-20210813205915-30853" exists ...
	I0813 21:10:43.268800   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.269179   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.269231   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.270737   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0813 21:10:43.271117   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.271588   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.271614   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.271955   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.272130   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.275862   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.278011   11600 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:10:43.278087   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:10:43.278099   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:10:43.278122   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.280649   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0813 21:10:43.281018   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0813 21:10:43.281258   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.281705   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.281820   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.281840   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.282233   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.282382   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.282400   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.282403   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.282772   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.282933   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.286320   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.286532   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.288191   11600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:10:43.286938   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.288312   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.287193   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.287628   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.288362   11600 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:43.288376   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:10:43.288396   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.288523   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.289968   11600 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:10:43.288678   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.291508   11600 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:10:43.291568   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:10:43.291579   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:10:43.290296   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.291596   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.292931   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0813 21:10:43.293290   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.293838   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.293859   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.294224   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.294793   11600 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:10:43.294930   11600 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:10:43.296172   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.296766   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.296794   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.297070   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.297233   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.297402   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.297537   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.298841   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.299283   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.299312   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.299430   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.299586   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.299727   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.299911   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.308859   11600 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0813 21:10:43.309223   11600 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:10:43.309689   11600 main.go:130] libmachine: Using API Version  1
	I0813 21:10:43.309713   11600 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:10:43.310081   11600 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:10:43.310261   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetState
	I0813 21:10:43.312995   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .DriverName
	I0813 21:10:43.313192   11600 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:43.313207   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:10:43.313224   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHHostname
	I0813 21:10:43.318697   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.319136   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d2:3d", ip: ""} in network mk-no-preload-20210813205915-30853: {Iface:virbr7 ExpiryTime:2021-08-13 22:04:00 +0000 UTC Type:0 Mac:52:54:00:60:d2:3d Iaid: IPaddr:192.168.105.107 Prefix:24 Hostname:no-preload-20210813205915-30853 Clientid:01:52:54:00:60:d2:3d}
	I0813 21:10:43.319164   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | domain no-preload-20210813205915-30853 has defined IP address 192.168.105.107 and MAC address 52:54:00:60:d2:3d in network mk-no-preload-20210813205915-30853
	I0813 21:10:43.319284   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHPort
	I0813 21:10:43.319423   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHKeyPath
	I0813 21:10:43.319563   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .GetSSHUsername
	I0813 21:10:43.319647   11600 sshutil.go:53] new ssh client: &{IP:192.168.105.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813205915-30853/id_rsa Username:docker}
	I0813 21:10:43.415710   11600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:10:43.415702   11600 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813205915-30853" to be "Ready" ...
	I0813 21:10:43.421333   11600 node_ready.go:49] node "no-preload-20210813205915-30853" has status "Ready":"True"
	I0813 21:10:43.421346   11600 node_ready.go:38] duration metric: took 5.531339ms waiting for node "no-preload-20210813205915-30853" to be "Ready" ...
	I0813 21:10:43.421356   11600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:43.428420   11600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:43.449946   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:10:43.449967   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:10:43.498458   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:10:43.513925   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:10:43.518020   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:10:43.518039   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:10:43.528422   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:10:43.528442   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:10:43.587727   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:10:43.587758   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:10:43.590766   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:10:43.590788   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:10:43.656475   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:10:43.656504   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:10:43.677102   11600 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:43.677125   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:10:43.739364   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:10:43.741528   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:10:43.741548   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:10:43.865339   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:10:43.865366   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:10:43.945836   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:10:43.945863   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:10:44.183060   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:10:44.183089   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:10:44.293405   11600 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:44.293435   11600 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:10:44.354576   11600 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:10:45.209715   11600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.793954911s)
	I0813 21:10:45.209754   11600 start.go:728] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS
	I0813 21:10:45.451507   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:45.768509   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.254548453s)
	I0813 21:10:45.768554   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.768568   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.768630   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.270149498s)
	I0813 21:10:45.768648   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.768657   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.768844   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.768865   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.768875   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.768889   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.768988   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.768993   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:45.769003   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.769017   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.769029   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.769078   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:45.769100   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.769116   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.769133   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:45.769142   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:45.769244   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:45.769252   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.769266   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:45.770445   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:45.770461   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:46.554301   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.814896952s)
	I0813 21:10:46.554352   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:46.554372   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:46.554651   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:46.554666   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:46.554675   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:46.554682   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:46.554919   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:46.554933   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:46.554943   11600 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813205915-30853"
	I0813 21:10:46.554967   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:47.488790   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:47.928364   11600 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.573734309s)
	I0813 21:10:47.928421   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:47.928444   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:47.928734   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) DBG | Closing plugin on server side
	I0813 21:10:47.928749   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:47.928765   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:47.928786   11600 main.go:130] libmachine: Making call to close driver server
	I0813 21:10:47.928801   11600 main.go:130] libmachine: (no-preload-20210813205915-30853) Calling .Close
	I0813 21:10:47.929007   11600 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:10:47.929021   11600 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:10:47.931002   11600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:10:47.931024   11600 addons.go:344] enableAddons completed in 4.693964191s
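
Each addon enabled above follows the same two-step pattern: the manifest is copied from memory onto the node ("scp memory --> /etc/kubernetes/addons/..."), then applied in one batch with the cluster's pinned kubectl binary under the node kubeconfig. A sketch of that pattern under stated assumptions: applyAddon is an illustrative helper, the placeholder manifest is not a real addon, and the paths are copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon stands in for the scp-from-memory step followed by the
// "sudo KUBECONFIG=... kubectl apply -f ..." invocation seen in the log.
func applyAddon(manifest []byte, dst string) error {
	if err := os.WriteFile(dst, manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl", "apply", "-f", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s: %v: %s", dst, err, out)
	}
	return nil
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n") // placeholder
	if err := applyAddon(manifest, "/etc/kubernetes/addons/example.yaml"); err != nil {
		fmt.Println(err)
	}
}
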
	I0813 21:10:49.943997   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:51.946699   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:54.448800   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:56.946503   11600 pod_ready.go:102] pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:57.445648   11600 pod_ready.go:97] error getting pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-8cmv5" not found
	I0813 21:10:57.445683   11600 pod_ready.go:81] duration metric: took 14.017237637s waiting for pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:57.445697   11600 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-8cmv5" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-8cmv5" not found
	I0813 21:10:57.445707   11600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-djqln" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.458150   11600 pod_ready.go:92] pod "coredns-78fcd69978-djqln" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.458167   11600 pod_ready.go:81] duration metric: took 12.453041ms waiting for pod "coredns-78fcd69978-djqln" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.458177   11600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.470140   11600 pod_ready.go:92] pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.470159   11600 pod_ready.go:81] duration metric: took 11.975627ms waiting for pod "etcd-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.470169   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.485954   11600 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.485976   11600 pod_ready.go:81] duration metric: took 15.799825ms waiting for pod "kube-apiserver-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.485988   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.495923   11600 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.495948   11600 pod_ready.go:81] duration metric: took 9.9489ms waiting for pod "kube-controller-manager-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.495962   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pm8kf" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.641582   11600 pod_ready.go:92] pod "kube-proxy-pm8kf" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:57.641605   11600 pod_ready.go:81] duration metric: took 145.634184ms waiting for pod "kube-proxy-pm8kf" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:57.641618   11600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:58.057929   11600 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace has status "Ready":"True"
	I0813 21:10:58.057949   11600 pod_ready.go:81] duration metric: took 416.322441ms waiting for pod "kube-scheduler-no-preload-20210813205915-30853" in "kube-system" namespace to be "Ready" ...
	I0813 21:10:58.057958   11600 pod_ready.go:38] duration metric: took 14.636591071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
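
The pod_ready.go loop above is a straight condition poll: fetch the pod, check its PodReady condition, retry until the 6m0s budget runs out. A client-go sketch of that wait; waitPodReady is an illustrative helper, while the pod name and kubeconfig path are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its PodReady condition is True.
func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %q in %q not Ready within %v", name, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-78fcd69978-djqln", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
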
	I0813 21:10:58.057974   11600 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:10:58.058016   11600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:10:58.072729   11600 api_server.go:70] duration metric: took 14.835697758s to wait for apiserver process to appear ...
	I0813 21:10:58.072753   11600 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:10:58.072764   11600 api_server.go:239] Checking apiserver healthz at https://192.168.105.107:8443/healthz ...
	I0813 21:10:58.080263   11600 api_server.go:265] https://192.168.105.107:8443/healthz returned 200:
	ok
	I0813 21:10:58.082131   11600 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:10:58.082151   11600 api_server.go:129] duration metric: took 9.390895ms to wait for apiserver health ...
	I0813 21:10:58.082162   11600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:10:58.250891   11600 system_pods.go:59] 8 kube-system pods found
	I0813 21:10:58.250963   11600 system_pods.go:61] "coredns-78fcd69978-djqln" [99eb1cf7-bc30-4c62-a70a-1d529bd0c68b] Running
	I0813 21:10:58.250985   11600 system_pods.go:61] "etcd-no-preload-20210813205915-30853" [242eb16a-dc65-4352-beb3-09cd64be834c] Running
	I0813 21:10:58.251008   11600 system_pods.go:61] "kube-apiserver-no-preload-20210813205915-30853" [9293ee98-b7b3-47f2-b7bd-4614b8482ca1] Running
	I0813 21:10:58.251025   11600 system_pods.go:61] "kube-controller-manager-no-preload-20210813205915-30853" [91eee213-027e-4385-ab9c-23a1edf8ccde] Running
	I0813 21:10:58.251033   11600 system_pods.go:61] "kube-proxy-pm8kf" [94304ca2-43ad-479d-b0cf-0d034dd53c30] Running
	I0813 21:10:58.251042   11600 system_pods.go:61] "kube-scheduler-no-preload-20210813205915-30853" [63cdc1cb-db75-4391-a159-9f351f3f189b] Running
	I0813 21:10:58.251060   11600 system_pods.go:61] "metrics-server-7c784ccb57-sjf7l" [1a8eb8de-eb5b-4305-9a3c-0f560914ed99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:58.251071   11600 system_pods.go:61] "storage-provisioner" [7701997b-7e28-4be2-925c-50ca1dd46b4e] Running
	I0813 21:10:58.251085   11600 system_pods.go:74] duration metric: took 168.915852ms to wait for pod list to return data ...
	I0813 21:10:58.251100   11600 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:10:58.441502   11600 default_sa.go:45] found service account: "default"
	I0813 21:10:58.441527   11600 default_sa.go:55] duration metric: took 190.416989ms for default service account to be created ...
	I0813 21:10:58.441539   11600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:10:58.645345   11600 system_pods.go:86] 8 kube-system pods found
	I0813 21:10:58.645376   11600 system_pods.go:89] "coredns-78fcd69978-djqln" [99eb1cf7-bc30-4c62-a70a-1d529bd0c68b] Running
	I0813 21:10:58.645382   11600 system_pods.go:89] "etcd-no-preload-20210813205915-30853" [242eb16a-dc65-4352-beb3-09cd64be834c] Running
	I0813 21:10:58.645386   11600 system_pods.go:89] "kube-apiserver-no-preload-20210813205915-30853" [9293ee98-b7b3-47f2-b7bd-4614b8482ca1] Running
	I0813 21:10:58.645391   11600 system_pods.go:89] "kube-controller-manager-no-preload-20210813205915-30853" [91eee213-027e-4385-ab9c-23a1edf8ccde] Running
	I0813 21:10:58.645395   11600 system_pods.go:89] "kube-proxy-pm8kf" [94304ca2-43ad-479d-b0cf-0d034dd53c30] Running
	I0813 21:10:58.645400   11600 system_pods.go:89] "kube-scheduler-no-preload-20210813205915-30853" [63cdc1cb-db75-4391-a159-9f351f3f189b] Running
	I0813 21:10:58.645412   11600 system_pods.go:89] "metrics-server-7c784ccb57-sjf7l" [1a8eb8de-eb5b-4305-9a3c-0f560914ed99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:10:58.645418   11600 system_pods.go:89] "storage-provisioner" [7701997b-7e28-4be2-925c-50ca1dd46b4e] Running
	I0813 21:10:58.645427   11600 system_pods.go:126] duration metric: took 203.88379ms to wait for k8s-apps to be running ...
	I0813 21:10:58.645458   11600 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:10:58.645508   11600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:10:58.657715   11600 system_svc.go:56] duration metric: took 12.247152ms WaitForService to wait for kubelet.
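
The kubelet check relies on systemctl's exit status: with --quiet, `systemctl is-active` prints nothing and signals the unit state purely through its exit code. A local sketch of the command the log records:

	// Run the same `systemctl is-active --quiet service kubelet` check and
	// treat exit status 0 as "kubelet is running".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
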
	I0813 21:10:58.657747   11600 kubeadm.go:547] duration metric: took 15.420720912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:10:58.657776   11600 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:10:58.842378   11600 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:10:58.842407   11600 node_conditions.go:123] node cpu capacity is 2
	I0813 21:10:58.842421   11600 node_conditions.go:105] duration metric: took 184.639144ms to run NodePressure ...
	I0813 21:10:58.842431   11600 start.go:231] waiting for startup goroutines ...
	I0813 21:10:58.885572   11600 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:10:58.887709   11600 out.go:177] 
	W0813 21:10:58.887907   11600 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:10:58.889610   11600 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:10:58.891372   11600 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813205915-30853" cluster and "default" namespace by default
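
The "(minor skew: 2)" in the version line above is the distance between kubectl's minor version (1.20) and the cluster's (1.22); kubectl is only supported within one minor version of the apiserver, hence the warning. A sketch of the computation (a hypothetical helper, not minikube's actual code):

	// Parse the minor component of two version strings and report the skew.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minor(v string) int {
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectlVersion, clusterVersion := "1.20.5", "1.22.0-rc.0"
		skew := minor(clusterVersion) - minor(kubectlVersion)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // prints 2, matching the log line above
	}
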
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:03:59 UTC, end at Fri 2021-08-13 21:11:20 UTC. --
	Aug 13 21:11:18 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:18.679266253Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,StartedAt:1628889049986477926,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7701997b-7e28-4be2-925c-50ca1dd46b4e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7701997b-7e28-4be2-925c-50ca1dd46b4e/containers/storage-provisioner/2a731eea,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/7701997b-7e28-4be2-925c-50ca1dd46b4e/volumes/kubernetes.io~projected/kube-api-access-l98z4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_7701997b-7e28-4be2-925c-50ca1dd46b4e/storage
-provisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=3d359ce0-0cb3-4503-8a4e-400af20dfe44 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.698512104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d0e2c6f-c791-4e0f-bba5-47c504a53978 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.698598214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d0e2c6f-c791-4e0f-bba5-47c504a53978 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.698853097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d0e2c6f-c791-4e0f-bba5-47c504a53978 name=/runtime.v1alpha2.RuntimeService/ListContainers
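
Each Request/Response pair above is one CRI gRPC call against CRI-O's socket; the empty ContainerFilter is why the "No filters were applied" line appears and the full container list comes back. A sketch reproducing the same runtime.v1alpha2 ListContainers call (the socket path and insecure dialing reflect this report's VM and may differ elsewhere):

	// Dial CRI-O's unix socket and issue the same ListContainers RPC the
	// debug log shows, printing one line per container.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"

		"google.golang.org/grpc"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		dialer := func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}
		conn, err := grpc.Dial("/var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithContextDialer(dialer))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// An empty request means no filter, matching the log entries above.
		resp, err := pb.NewRuntimeServiceClient(conn).ListContainers(ctx, &pb.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-30s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}

From the command line, `sudo crictl ps -a` returns the same list, given crictl is pointed at the CRI-O socket.
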
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.737227575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a4aef2b-1235-4235-b5d4-9b3d67c5cb1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.737384378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a4aef2b-1235-4235-b5d4-9b3d67c5cb1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.737602730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a4aef2b-1235-4235-b5d4-9b3d67c5cb1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.774684961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2ef50264-51fe-4667-b1db-551b402c4645 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.774831366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2ef50264-51fe-4667-b1db-551b402c4645 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.775117762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2ef50264-51fe-4667-b1db-551b402c4645 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.821343899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cab45126-44d6-4d8e-b3c5-0fbd58ad8041 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.821521787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cab45126-44d6-4d8e-b3c5-0fbd58ad8041 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.824860135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cab45126-44d6-4d8e-b3c5-0fbd58ad8041 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.877398116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=04d016aa-c977-4a25-bba1-11432620958e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.877538627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=04d016aa-c977-4a25-bba1-11432620958e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:11:19 no-preload-20210813205915-30853 crio[2040]: time="2021-08-13 21:11:19.877772176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6,PodSandboxId:87d91f84471b4c902bb706b1e8f970e14c1a7f6d3be78fd3c6dbd9bfe27984e5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,Annotations:map[string]string{},},ImageRef:docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f,State:CONTAINER_RUNNING,CreatedAt:1628889063346481332,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-6fcdf4f6d-vl8vp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 772a7e9a-469e-46a7-9d84-da2b0f029cb7,},Annotations:map[string]string{io.kuberne
tes.container.hash: cf82f44f,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0,PodSandboxId:0bb2fe30a9e8fd403bf3b33355529cac3f0a2f777f4f5f5cf9912f174735d436,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb,State:CONTAINER_EXITED,CreatedAt:1628889059294931256,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-8685c45546-jq4mn,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 1b135bae-5a0b-452f-ac13-77578d4f5d7b,},Annotations:map[string]string{io.kubernetes.container.hash: b97f901c,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf,PodSandboxId:023a3746224ec9401194ccd72bf40bbcc3d3f47eec96f41df5b49591c068168b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628889049870366054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7701997b-7e28-4be2-925c-50ca1dd46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: a45e10a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c,PodSandboxId:6e72a1de7959a0b2cb5731abfbee0260fe422501e90f45517655cba55a590288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:c395db36404e62624c217f1d5a1eb985a4b27acc0b206362bcfb77074a47bce5,State:CONTAINER_RUNNING,CreatedAt:1628889048429879275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pm8kf,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94304ca2-43ad-479d-b0cf-0d034dd53c30,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd8ec32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9,PodSandboxId:d70372cc3523e5cb156d7e4e244d1b57eca8773239de1dcfb042401e632554b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CONTAINER_RUNNING,CreatedAt:1628889047820393572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-djqln,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 99eb1cf7-bc30-4c62-a70a-1d529bd0c68b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7e6162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3,PodSandboxId:7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4,State:CO
NTAINER_EXITED,CreatedAt:1628889046675482222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-78fcd69978-8cmv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c79edd-1c3d-4902-9ba5-604d2bf0cb16,},Annotations:map[string]string{io.kubernetes.container.hash: 8a85231a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357,PodSandboxId:5eb86fb89bad6e1314201fa2bb4b4723cb91a35e9a274699211b85320f6be0f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0048118155842e4c91f0498dd2
98b8e93dc3aecc7052d9882b76f48e311a76ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889019578216193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecdd2063d998946a324c2f9eb9a9f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 14f33e5f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0,PodSandboxId:24d192076c6efb85520ce0d96af284c3279355d1c6e0e01f8ef00c2c45f38098,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f
5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:4bde538274037ea6b3b82c4e651f74bf6525576720016d5dc50810460225ac88,State:CONTAINER_RUNNING,CreatedAt:1628889019422618591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9dfc9e43c55f87ba99ef680db32e7f,},Annotations:map[string]string{io.kubernetes.container.hash: fe75c9af,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b,PodSandboxId:5c6071832ca8b8935dac81f92d083cefa858a64500e9956659ec4cfdee5f3280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772
092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:c3feab9259b87bafe671ae1ad935b50368d023e27986b135051a87c2a8720d6a,State:CONTAINER_RUNNING,CreatedAt:1628889018846001347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f5b65a88bdbb7dbcdcc7221494a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: cf90c5cb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455,PodSandboxId:9394e33d36b0239f5c40caf76694cc1a33a5a6ba5fa252577aba5bd2c246b23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e
0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:18cddaf29eedf92cae9b7552d1b36bb8c0034a97aa0ef6600e03cc69770d8a89,State:CONTAINER_RUNNING,CreatedAt:1628889018370959845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-20210813205915-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2eb7e44b4bcbc213126c087d7394ff,},Annotations:map[string]string{io.kubernetes.container.hash: ed9d8f45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=04d016aa-c977-4a25-bba1-11432620958e name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID
	b9047880f9040       docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f   16 seconds ago       Running             kubernetes-dashboard        0                   87d91f84471b4
	2b6e74c197286       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   1                   0bb2fe30a9e8f
	efd6ad12aeb56       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           30 seconds ago       Running             storage-provisioner         0                   023a3746224ec
	25c01a205c3e2       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                                           31 seconds ago       Running             kube-proxy                  0                   6e72a1de7959a
	7dccf87cedabe       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           32 seconds ago       Running             coredns                     0                   d70372cc3523e
	ca410dc379be2       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           33 seconds ago       Exited              coredns                     0                   7f0eb16b3e187
	9212298dc475e       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                           About a minute ago   Running             etcd                        2                   5eb86fb89bad6
	34a67fc4c35df       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                                           About a minute ago   Running             kube-scheduler              2                   24d192076c6ef
	94a7894b63ddd       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                                           About a minute ago   Running             kube-controller-manager     3                   5c6071832ca8b
	fa3d77da505a8       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                                           About a minute ago   Running             kube-apiserver              2                   9394e33d36b02
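	The status table above is the CRI-level view of the node. A minimal way to re-check it by hand, assuming the same minikube profile name and that crictl is available inside the guest (both consistent with the other dumps in this run), is:
	
	out/minikube-linux-amd64 -p no-preload-20210813205915-30853 ssh "sudo crictl ps -a"
	
	The -a flag includes exited containers, which is why the terminated dashboard-metrics-scraper and coredns attempts appear alongside the running ones.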
	
	* 
	* ==> coredns [7dccf87cedabe845110c6f3b366b12cd084dd26baa8e94570794996e29a0e8f9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> coredns [ca410dc379be23d60253e103458eba0c3c14829fd784dc2b8b5d507526bba5e3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
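	This second coredns instance logs a SIGTERM followed by a 5s lameduck shutdown, consistent with its CONTAINER_EXITED state in the listings above (minikube scales CoreDNS down to a single replica after start). While the pod object still exists, the same log can be pulled directly; the context name is assumed to match the profile, as elsewhere in this report:
	
	kubectl --context no-preload-20210813205915-30853 -n kube-system logs coredns-78fcd69978-8cmv5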
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210813205915-30853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210813205915-30853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=no-preload-20210813205915-30853
	                    minikube.k8s.io/updated_at=2021_08_13T21_10_29_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:10:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210813205915-30853
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 21:11:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:11:05 +0000   Fri, 13 Aug 2021 21:10:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.107
	  Hostname:    no-preload-20210813205915-30853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e63b2204c374f92b3ae1588b0df2556
	  System UUID:                4e63b220-4c37-4f92-b3ae-1588b0df2556
	  Boot ID:                    f8ffe43d-74d2-470d-ae6c-3ef2eea0cc3d
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-djqln                                   100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     38s
	  kube-system                 etcd-no-preload-20210813205915-30853                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         45s
	  kube-system                 kube-apiserver-no-preload-20210813205915-30853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-no-preload-20210813205915-30853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-pm8kf                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-no-preload-20210813205915-30853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 metrics-server-7c784ccb57-sjf7l                            100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         34s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-jq4mn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-vl8vp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  64s (x6 over 64s)  kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x6 over 64s)  kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x5 over 64s)  kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientPID
	  Normal  Starting                 46s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                38s                kubelet  Node no-preload-20210813205915-30853 status is now: NodeReady
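	The block above is standard describe-node output. To regenerate it against the same cluster, assuming the kubectl context carries the profile name as in the commands earlier in this report:
	
	kubectl --context no-preload-20210813205915-30853 describe node no-preload-20210813205915-30853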
	
	* 
	* ==> dmesg <==
	* [  +0.033491] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.024807] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1718 comm=systemd-network
	[Aug13 21:04] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.357120] vboxguest: loading out-of-tree module taints kernel.
	[  +0.014328] vboxguest: PCI device not found, probably running on physical hardware.
	[  +3.482256] systemd-fstab-generator[2115]: Ignoring "noauto" for root device
	[  +0.157515] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.205705] systemd-fstab-generator[2154]: Ignoring "noauto" for root device
	[ +29.221301] systemd-fstab-generator[2922]: Ignoring "noauto" for root device
	[Aug13 21:05] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.840286] kauditd_printk_skb: 89 callbacks suppressed
	[Aug13 21:06] NFSD: Unable to end grace period: -110
	[Aug13 21:09] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.154978] kauditd_printk_skb: 14 callbacks suppressed
	[ +16.228089] kauditd_printk_skb: 14 callbacks suppressed
	[Aug13 21:10] systemd-fstab-generator[5166]: Ignoring "noauto" for root device
	[ +18.484011] systemd-fstab-generator[5528]: Ignoring "noauto" for root device
	[ +15.353956] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.119674] kauditd_printk_skb: 77 callbacks suppressed
	[  +7.555194] kauditd_printk_skb: 32 callbacks suppressed
	[Aug13 21:11] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.735187] systemd-fstab-generator[7065]: Ignoring "noauto" for root device
	[  +0.827429] systemd-fstab-generator[7121]: Ignoring "noauto" for root device
	[  +1.026098] systemd-fstab-generator[7175]: Ignoring "noauto" for root device
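	The kernel ring buffer can be re-read with the same minikube ssh pattern used for the other host-side dumps in this report:
	
	out/minikube-linux-amd64 -p no-preload-20210813205915-30853 ssh "dmesg"
	
	The vboxguest "PCI device not found" line is expected here: the guest image ships the VirtualBox module, but this run is on KVM, so the module loads and then finds no matching hardware.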
	
	* 
	* ==> etcd [9212298dc475e4a9b172bb2b482d02f9dd07a05b7e02c7a75bb7a8c7eb736357] <==
	* {"level":"info","ts":"2021-08-13T21:10:20.982Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"96c7252540d6160b","local-member-attributes":"{Name:no-preload-20210813205915-30853 ClientURLs:[https://192.168.105.107:2379]}","request-path":"/0/members/96c7252540d6160b/attributes","cluster-id":"ceda1e46dcc8afbb","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T21:10:20.984Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:10:20.985Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.105.107:2379"}
	{"level":"info","ts":"2021-08-13T21:10:20.985Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:10:20.986Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:10:20.990Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-13T21:10:20.990Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T21:10:20.990Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T21:10:20.996Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"ceda1e46dcc8afbb","local-member-id":"96c7252540d6160b","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:10:20.999Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:10:20.999Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2021-08-13T21:10:25.674Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.550074ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1588498814190764221 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io\" value_size:887 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2021-08-13T21:10:25.674Z","caller":"traceutil/trace.go:171","msg":"trace[466367915] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"183.081565ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.674Z","steps":["trace[466367915] 'process raft request'  (duration: 63.556934ms)","trace[466367915] 'compare'  (duration: 117.56781ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:10:25.675Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.881298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-13T21:10:25.675Z","caller":"traceutil/trace.go:171","msg":"trace[1359346222] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:0; response_revision:23; }","duration":"184.101664ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.675Z","steps":["trace[1359346222] 'agreement among raft nodes before linearized reading'  (duration: 63.471235ms)","trace[1359346222] 'range keys from in-memory index tree'  (duration: 120.372978ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-13T21:10:25.676Z","caller":"traceutil/trace.go:171","msg":"trace[94189713] linearizableReadLoop","detail":"{readStateIndex:32; appliedIndex:26; }","duration":"121.583518ms","start":"2021-08-13T21:10:25.554Z","end":"2021-08-13T21:10:25.676Z","steps":["trace[94189713] 'read index received'  (duration: 118.465448ms)","trace[94189713] 'applied index is now lower than readState.Index'  (duration: 3.105664ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:10:25.676Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.865694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-13T21:10:25.676Z","caller":"traceutil/trace.go:171","msg":"trace[1201218958] range","detail":"{range_begin:/registry/resourcequotas/kube-system/; range_end:/registry/resourcequotas/kube-system0; response_count:0; response_revision:29; }","duration":"185.047263ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.676Z","steps":["trace[1201218958] 'agreement among raft nodes before linearized reading'  (duration: 184.715477ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.677Z","caller":"traceutil/trace.go:171","msg":"trace[419869988] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"185.538499ms","start":"2021-08-13T21:10:25.491Z","end":"2021-08-13T21:10:25.677Z","steps":["trace[419869988] 'process raft request'  (duration: 182.830629ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.677Z","caller":"traceutil/trace.go:171","msg":"trace[21709021] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"183.435038ms","start":"2021-08-13T21:10:25.494Z","end":"2021-08-13T21:10:25.677Z","steps":["trace[21709021] 'process raft request'  (duration: 181.472896ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.678Z","caller":"traceutil/trace.go:171","msg":"trace[39482200] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"181.216756ms","start":"2021-08-13T21:10:25.496Z","end":"2021-08-13T21:10:25.678Z","steps":["trace[39482200] 'process raft request'  (duration: 178.96379ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.678Z","caller":"traceutil/trace.go:171","msg":"trace[2091172688] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"179.854409ms","start":"2021-08-13T21:10:25.498Z","end":"2021-08-13T21:10:25.678Z","steps":["trace[2091172688] 'process raft request'  (duration: 177.412662ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:10:25.678Z","caller":"traceutil/trace.go:171","msg":"trace[1955463242] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"179.886275ms","start":"2021-08-13T21:10:25.498Z","end":"2021-08-13T21:10:25.678Z","steps":["trace[1955463242] 'process raft request'  (duration: 177.391033ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:10:25.679Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"130.345052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-13T21:10:25.679Z","caller":"traceutil/trace.go:171","msg":"trace[52079535] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:29; }","duration":"130.525353ms","start":"2021-08-13T21:10:25.548Z","end":"2021-08-13T21:10:25.679Z","steps":["trace[52079535] 'agreement among raft nodes before linearized reading'  (duration: 130.295644ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:11:20 up 7 min,  0 users,  load average: 2.01, 0.84, 0.39
	Linux no-preload-20210813205915-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [fa3d77da505a8258807c0abbde0dbba4e36c88cadcaa65a9e0803443d856a455] <==
	* I0813 21:10:25.401281       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 21:10:25.411578       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 21:10:25.415749       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:10:25.418890       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:10:25.479256       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 21:10:26.193514       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:10:26.193656       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:10:26.217267       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:10:26.229620       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:10:26.229750       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:10:27.383464       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:10:27.475877       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:10:27.600268       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.105.107]
	I0813 21:10:27.601969       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:10:27.616325       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:10:28.374203       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:10:29.548700       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:10:29.677140       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:10:34.943595       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:10:42.061016       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 21:10:42.293352       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 21:10:48.844616       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:10:48.844896       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:10:48.844988       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
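
The only errors in this apiserver log are the OpenAPI aggregation failures for v1beta1.metrics.k8s.io: the APIService is registered, but its backing metrics-server pod never becomes Ready (see the kubelet section below, where its image points at the unresolvable registry fake.domain), so the aggregator receives a 503 and requeues with rate limiting. For separating such E/W lines from the informational noise, a quick severity filter over klog output is enough; a hypothetical sketch, assuming the standard klog prefix ("E0813 21:10:48.844896 ..." as seen above):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// klog lines begin with a severity letter (I/W/E/F) followed by MMDD.
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if len(line) < 5 {
				continue
			}
			if (line[0] == 'E' || line[0] == 'W' || line[0] == 'F') && isDigits(line[1:5]) {
				fmt.Println(line) // keep only warnings and above
			}
		}
	}

	func isDigits(s string) bool {
		for _, r := range s {
			if r < '0' || r > '9' {
				return false
			}
		}
		return true
	}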
	
	* 
	* ==> kube-controller-manager [94a7894b63ddd9914beca752deee20419952c82ccc23bae1f8fb6b765d19709b] <==
	* E0813 21:10:46.838462       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:46.872718       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0813 21:10:46.905564       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 21:10:46.927969       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:46.933545       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:46.982501       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:46.982953       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:46.983342       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:47.026346       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:10:47.066378       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.067188       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.095653       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.096290       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.128994       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.129551       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.144354       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.144845       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.158940       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.159304       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:10:47.174487       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:10:47.175186       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:10:47.209857       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-vl8vp"
	I0813 21:10:47.282990       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-jq4mn"
	E0813 21:11:12.138998       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:11:12.664368       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
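
The two trailing errors are a consequence of the unavailable metrics API rather than a controller-manager fault: the resource-quota controller and the garbage collector enumerate API groups through aggregated discovery, and metrics.k8s.io/v1beta1 answers 503 while metrics-server is down. The same partial-discovery condition can be observed with client-go; a minimal sketch, assuming a reachable kubeconfig at the default path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// ServerPreferredResources returns partial results plus an
		// ErrGroupDiscoveryFailed naming each unreachable group, which is
		// exactly the condition the controller-manager logs above.
		_, err = dc.ServerPreferredResources()
		if gd, ok := err.(*discovery.ErrGroupDiscoveryFailed); ok {
			for gv, gerr := range gd.Groups {
				fmt.Printf("unavailable group %s: %v\n", gv, gerr)
			}
		}
	}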
	
	* 
	* ==> kube-proxy [25c01a205c3e20978b3bc32e3e0dcddf8c4a3d0af7f1c51ba8a8f04e29fdfc8c] <==
	* I0813 21:10:48.853441       1 node.go:172] Successfully retrieved node IP: 192.168.105.107
	I0813 21:10:48.853627       1 server_others.go:140] Detected node IP 192.168.105.107
	W0813 21:10:48.853660       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0813 21:10:48.967405       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:10:48.967513       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:10:48.967532       1 server_others.go:212] Using iptables Proxier.
	I0813 21:10:48.967851       1 server.go:649] Version: v1.22.0-rc.0
	I0813 21:10:48.982302       1 config.go:224] Starting endpoint slice config controller
	I0813 21:10:48.982408       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 21:10:48.982502       1 config.go:315] Starting service config controller
	I0813 21:10:48.982589       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0813 21:10:48.996165       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813205915-30853.169af9f5b64b81d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd5d639bafe6d, ext:384291207, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813205915-30853", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813205915-30853", UID:"no-preload-20210813205915-30853", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813205915-30853.169af9f5b64b81d6" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 21:10:49.082710       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:10:49.107748       1 shared_informer.go:247] Caches are synced for service config 
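
The single error above is the apiserver rejecting kube-proxy's startup event: the event was created in the "default" namespace while its involvedObject, the Node, is cluster-scoped and has an empty namespace, and the events.k8s.io validation quoted in the message requires the two to agree. The rule itself reduces to a one-line check; an illustrative sketch (not the apiserver's actual code):

	package main

	import "fmt"

	// validateEventNamespace mirrors the check behind the rejection above: an
	// Event must live in the same namespace as the object it refers to, and
	// cluster-scoped objects such as Nodes have an empty namespace.
	func validateEventNamespace(eventNS, involvedObjectNS string) error {
		if eventNS != involvedObjectNS {
			return fmt.Errorf("involvedObject.namespace: Invalid value: %q: does not match event.namespace", involvedObjectNS)
		}
		return nil
	}

	func main() {
		// kube-proxy above emitted a Node event into "default":
		fmt.Println(validateEventNamespace("default", ""))
	}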
	
	* 
	* ==> kube-scheduler [34a67fc4c35dfc5785a04c461cc2101390e8c61a52e316ba718bc817bc0552e0] <==
	* E0813 21:10:25.422879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:25.431464       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:10:25.431869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:25.432124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:25.432305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:25.432483       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:25.432748       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:25.432922       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:25.436389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:10:26.310536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:10:26.415612       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:26.475529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:26.497944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:26.503605       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:10:26.532670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 21:10:26.709350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:26.757967       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:26.775464       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:10:26.776592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:26.790626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:10:26.854351       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:26.886705       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:10:26.935133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:27.009423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 21:10:28.990359       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:03:59 UTC, end at Fri 2021-08-13 21:11:20 UTC. --
	Aug 13 21:10:54 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:54.984645    5537 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7f0eb16b3e1877e01c21a4c9651540046f0c605d44122cda6986a1d8418d57cd"
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.104363    5537 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x5kj\" (UniqueName: \"kubernetes.io/projected/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-kube-api-access-8x5kj\") pod \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\" (UID: \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\") "
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.104415    5537 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-config-volume\") pod \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\" (UID: \"b2c79edd-1c3d-4902-9ba5-604d2bf0cb16\") "
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: W0813 21:10:56.106628    5537 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.107641    5537 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16" (UID: "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.115778    5537 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-kube-api-access-8x5kj" (OuterVolumeSpecName: "kube-api-access-8x5kj") pod "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16" (UID: "b2c79edd-1c3d-4902-9ba5-604d2bf0cb16"). InnerVolumeSpecName "kube-api-access-8x5kj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.205382    5537 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-config-volume\") on node \"no-preload-20210813205915-30853\" DevicePath \"\""
	Aug 13 21:10:56 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:56.205417    5537 reconciler.go:319] "Volume detached for volume \"kube-api-access-8x5kj\" (UniqueName: \"kubernetes.io/projected/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16-kube-api-access-8x5kj\") on node \"no-preload-20210813205915-30853\" DevicePath \"\""
	Aug 13 21:10:57 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:57.118281    5537 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b2c79edd-1c3d-4902-9ba5-604d2bf0cb16 path="/var/lib/kubelet/pods/b2c79edd-1c3d-4902-9ba5-604d2bf0cb16/volumes"
	Aug 13 21:10:59 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:10:59.019228    5537 scope.go:110] "RemoveContainer" containerID="e8819b1c4b8b8e0d8501b29e570d8970a455be807cb8584920ed19f31e409ccf"
	Aug 13 21:11:00 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:00.033240    5537 scope.go:110] "RemoveContainer" containerID="e8819b1c4b8b8e0d8501b29e570d8970a455be807cb8584920ed19f31e409ccf"
	Aug 13 21:11:00 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:00.033575    5537 scope.go:110] "RemoveContainer" containerID="2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	Aug 13 21:11:00 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:00.033825    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-jq4mn_kubernetes-dashboard(1b135bae-5a0b-452f-ac13-77578d4f5d7b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-jq4mn" podUID=1b135bae-5a0b-452f-ac13-77578d4f5d7b
	Aug 13 21:11:01 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:01.051765    5537 scope.go:110] "RemoveContainer" containerID="2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	Aug 13 21:11:01 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:01.057414    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-jq4mn_kubernetes-dashboard(1b135bae-5a0b-452f-ac13-77578d4f5d7b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-jq4mn" podUID=1b135bae-5a0b-452f-ac13-77578d4f5d7b
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.183553    5537 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.183593    5537 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.183713    5537 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9qnsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-sjf7l_kube-system(1a8eb8de-eb5b-4305-9a3c-0f560914ed99): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:03 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:03.184859    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-sjf7l" podUID=1a8eb8de-eb5b-4305-9a3c-0f560914ed99
	Aug 13 21:11:07 no-preload-20210813205915-30853 kubelet[5537]: I0813 21:11:07.365736    5537 scope.go:110] "RemoveContainer" containerID="2b6e74c1972860bd28cb035e4696884b6fb5e1c0ddff32aaf2b94c3b2e92a6a0"
	Aug 13 21:11:07 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:07.366383    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-jq4mn_kubernetes-dashboard(1b135bae-5a0b-452f-ac13-77578d4f5d7b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-jq4mn" podUID=1b135bae-5a0b-452f-ac13-77578d4f5d7b
	Aug 13 21:11:14 no-preload-20210813205915-30853 kubelet[5537]: E0813 21:11:14.103408    5537 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-sjf7l" podUID=1a8eb8de-eb5b-4305-9a3c-0f560914ed99
	Aug 13 21:11:15 no-preload-20210813205915-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:11:15 no-preload-20210813205915-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:11:15 no-preload-20210813205915-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
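
Two independent conditions show up in this kubelet log: dashboard-metrics-scraper is in CrashLoopBackOff, where the restart back-off starts at 10s (visible above) and roughly doubles per consecutive failure up to a 5m cap, and metrics-server is stuck in ErrImagePull/ImagePullBackOff because the test points its image at the unresolvable registry fake.domain, so the pull can never succeed. The final three lines are systemd stopping the kubelet, which is the harness pausing the node, not a crash. A back-of-the-envelope sketch of the doubling back-off (illustrative, not kubelet's implementation):

	package main

	import (
		"fmt"
		"time"
	)

	// crashBackoff approximates the restart back-off seen above: start at
	// 10s, double per consecutive failure, cap at 5 minutes.
	func crashBackoff(failures int) time.Duration {
		d := 10 * time.Second
		for i := 1; i < failures; i++ {
			d *= 2
			if d >= 5*time.Minute {
				return 5 * time.Minute
			}
		}
		return d
	}

	func main() {
		for n := 1; n <= 7; n++ {
			fmt.Printf("failure %d -> back-off %s\n", n, crashBackoff(n))
		}
	}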
	
	* 
	* ==> kubernetes-dashboard [b9047880f9040b587ab51fa76100c6d99ff26b9f39cc522d47b2878b5bad5bc6] <==
	* 2021/08/13 21:11:03 Starting overwatch
	2021/08/13 21:11:03 Using namespace: kubernetes-dashboard
	2021/08/13 21:11:03 Using in-cluster config to connect to apiserver
	2021/08/13 21:11:03 Using secret token for csrf signing
	2021/08/13 21:11:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:11:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:11:03 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 21:11:03 Generating JWE encryption key
	2021/08/13 21:11:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:11:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:11:03 Initializing JWE encryption key from synchronized object
	2021/08/13 21:11:03 Creating in-cluster Sidecar client
	2021/08/13 21:11:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:03 Serving insecurely on HTTP port: 9090
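
The dashboard itself starts cleanly; only its Sidecar (metrics) client fails its health check, again because dashboard-metrics-scraper is not yet serving, and it retries on a fixed 30s interval. That fixed-interval retry is a common pattern; a minimal sketch (hypothetical helper, with a shortened interval in the demo so it finishes quickly):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryEvery mirrors the dashboard's fixed-interval health-check retry
	// ("Retrying in 30 seconds."): call check until it succeeds or attempts
	// run out.
	func retryEvery(interval time.Duration, attempts int, check func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			fmt.Printf("health check failed: %v; retrying in %s\n", err, interval)
			time.Sleep(interval)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retryEvery(time.Second, 3, func() error {
			calls++
			if calls < 2 {
				return errors.New("the server is currently unable to handle the request")
			}
			return nil
		})
	}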
	
	* 
	* ==> storage-provisioner [efd6ad12aeb56882ff2de6cd5230977147e87f3e50776412888b22c23a345abf] <==
	* I0813 21:10:50.038904       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:10:50.065326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:10:50.065697       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:10:50.094327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:10:50.096537       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210813205915-30853_9e80c5ac-1545-45dc-a7ce-f7e1c00875a2!
	I0813 21:10:50.120317       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea4aeaf6-3f36-47b2-a3e5-385b27615b0f", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210813205915-30853_9e80c5ac-1545-45dc-a7ce-f7e1c00875a2 became leader
	I0813 21:10:50.211882       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210813205915-30853_9e80c5ac-1545-45dc-a7ce-f7e1c00875a2!
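
The provisioner comes up cleanly: it acquires the kube-system/k8s.io-minikube-hostpath lock via client-go leader election before starting its controller (the event above references an Endpoints object, so this build still uses the older endpoints-based lock). The same acquire-then-run pattern with the current Leases lock looks roughly like this sketch (in-cluster config assumed, error handling elided for brevity):

	package main

	import (
		"context"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, _ := rest.InClusterConfig()
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock namespace/name as in the provisioner log above.
		lock, _ := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"})

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() {
					// lost the lease; stop provisioning
				},
			},
		})
	}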
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853: exit status 2 (254.334858ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-sjf7l
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 describe pod metrics-server-7c784ccb57-sjf7l
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210813205915-30853 describe pod metrics-server-7c784ccb57-sjf7l: exit status 1 (65.068684ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-sjf7l" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210813205915-30853 describe pod metrics-server-7c784ccb57-sjf7l: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.46s)
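
A note on the post-mortem tail above: the metrics-server pod reported as non-running at helpers_test.go:271 had already been deleted or replaced by the time the describe at helpers_test.go:276 ran, so kubectl exits 1 with NotFound; the harness records this race, but it is not the Pause failure itself. Helpers like this can tolerate the race explicitly; a hedged client-go sketch (hypothetical helper, not the minikube test code):

	package main

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// describeIfPresent fetches a pod for a post-mortem dump but treats
	// "already gone" as a benign race rather than an error.
	func describeIfPresent(client kubernetes.Interface, ns, name string) error {
		pod, err := client.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("pod %s/%s vanished before describe; skipping\n", ns, name)
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Printf("pod %s/%s phase=%s\n", ns, name, pod.Status.Phase)
		return nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := describeIfPresent(client, "kube-system", "metrics-server-7c784ccb57-sjf7l"); err != nil {
			panic(err)
		}
	}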

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (85.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210813210910-30853 --alsologtostderr -v=1
E0813 21:12:56.748975   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210813210910-30853 --alsologtostderr -v=1: exit status 80 (2.445899829s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20210813210910-30853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:12:56.364046   14630 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:56.364171   14630 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:56.364184   14630 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:56.364190   14630 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:56.364339   14630 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:56.364557   14630 out.go:305] Setting JSON to false
	I0813 21:12:56.364577   14630 mustload.go:65] Loading cluster: newest-cni-20210813210910-30853
	I0813 21:12:56.365962   14630 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:56.366764   14630 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:56.366811   14630 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:56.377546   14630 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0813 21:12:56.378029   14630 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:56.378582   14630 main.go:130] libmachine: Using API Version  1
	I0813 21:12:56.378604   14630 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:56.379025   14630 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:56.379212   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:56.382219   14630 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:56.382554   14630 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:56.382593   14630 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:56.392943   14630 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0813 21:12:56.393303   14630 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:56.393731   14630 main.go:130] libmachine: Using API Version  1
	I0813 21:12:56.393753   14630 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:56.394021   14630 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:56.394187   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:56.394737   14630 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210813210910-30853 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:12:56.396933   14630 out.go:177] * Pausing node newest-cni-20210813210910-30853 ... 
	I0813 21:12:56.396968   14630 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:56.397393   14630 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:56.397437   14630 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:56.407472   14630 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0813 21:12:56.407875   14630 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:56.408375   14630 main.go:130] libmachine: Using API Version  1
	I0813 21:12:56.408402   14630 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:56.408776   14630 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:56.408983   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:56.409206   14630 ssh_runner.go:149] Run: systemctl --version
	I0813 21:12:56.409235   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:56.414789   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:56.415308   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:56.415341   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:56.415480   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:56.415640   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:56.415790   14630 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:56.415918   14630 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:56.536876   14630 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:56.549498   14630 pause.go:50] kubelet running: true
	I0813 21:12:56.549563   14630 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:56.888343   14630 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:56.888463   14630 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:57.012015   14630 cri.go:76] found id: "e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669"
	I0813 21:12:57.012045   14630 cri.go:76] found id: "21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe"
	I0813 21:12:57.012050   14630 cri.go:76] found id: "f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24"
	I0813 21:12:57.012054   14630 cri.go:76] found id: "81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203"
	I0813 21:12:57.012058   14630 cri.go:76] found id: "5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71"
	I0813 21:12:57.012062   14630 cri.go:76] found id: "f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78"
	I0813 21:12:57.012067   14630 cri.go:76] found id: "09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291"
	I0813 21:12:57.012070   14630 cri.go:76] found id: ""
	I0813 21:12:57.012111   14630 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 21:12:57.059700   14630 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291","pid":2608,"status":"running","bundle":"/run/containers/storage/overlay-containers/09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291/userdata","rootfs":"/var/lib/containers/storage/overlay/113aacb9174c2faefce53ee11ef68088562e27ebc366e16860d30d835b7b5124/merged","created":"2021-08-13T21:12:27.070852027Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffb6a91b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ffb6a91b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.902176937Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"32688baa2c6a65d13ce71d2e854f4832\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813210910-30853_32688baa2c6a65d13ce71d2e854f4832/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata"
:"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/113aacb9174c2faefce53ee11ef68088562e27ebc366e16860d30d835b7b5124/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/32688baa2c6a65d13
ce71d2e854f4832/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/32688baa2c6a65d13ce71d2e854f4832/containers/kube-apiserver/bd3cb872\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"32688baa2c6a65d13ce71d2e854f4832","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.210:8443","kubernetes.io/config.hash":"32688baa2c6a65d13ce71d2e854f4832","kubernetes.io/config.seen":"2021-08-13T21:12:24.637275916Z","kubernetes.io/config.source":"file","org.systemd
.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe","pid":3275,"status":"running","bundle":"/run/containers/storage/overlay-containers/21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe/userdata","rootfs":"/var/lib/containers/storage/overlay/f423c225b9d8b2fa7970566f83c5a865812205115d572bce7bf004becf8b9891/merged","created":"2021-08-13T21:12:53.604162723Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6be87df7","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6be87df7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessag
ePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:53.320445828Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/storage-provisioner/0.l
og","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f423c225b9d8b2fa7970566f83c5a865812205115d572bce7bf004becf8b9891/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":
\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/containers/storage-provisioner/3b35aef2\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/volumes/kubernetes.io~projected/kube-api-access-pd5tc\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\
"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T21:12:49.648945632Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","pid":2526,"status":"running","bundle":"/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata","rootfs":"/var/lib/containers/storage/overlay/bd6fd78678ecd2c6f2642788b570990a1f4d97c8623e3a6748bae10ad28611c0/merged","created":"2021-08-13T21:12:26
.383071519Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.210:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637273650Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podeab7e5e84ea4e6309241a6623f47ddd8.slice","io.kubernetes.cri-o.ContainerID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.146024584Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overl
ay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210813210910-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210813210910-30853_eab7e5e84ea4e6309241a6623f47ddd8/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-newest-cni-20210813210910-30853\",\"uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bd6fd78678ecd2c6f2642788b570990a1f4d97c8623e3a6748bae10ad28611c0/merged","i
o.kubernetes.cri-o.Name":"k8s_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/shm","io.kubernetes.pod.name":"etcd-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"eab7e5e84ea4e6309241a6623f47ddd8","kubeadm.kubernetes
.io/etcd.advertise-client-urls":"https://192.168.39.210:2379","kubernetes.io/config.hash":"eab7e5e84ea4e6309241a6623f47ddd8","kubernetes.io/config.seen":"2021-08-13T21:12:24.637273650Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","pid":2534,"status":"running","bundle":"/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata","rootfs":"/var/lib/containers/storage/overlay/3a4a1b30f59ea92fe7dd7e1af5199e72e21588290b3f4e853f39c0287482d964/merged","created":"2021-08-13T21:12:26.433134857Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"32688baa2c6a65d13ce71d2e854f4832\",\"kubeadm.kubernetes.io/kube-
apiserver.advertise-address.endpoint\":\"192.168.39.210:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637275916Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod32688baa2c6a65d13ce71d2e854f4832.slice","io.kubernetes.cri-o.ContainerID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.141226865Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"component
\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"32688baa2c6a65d13ce71d2e854f4832\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813210910-30853_32688baa2c6a65d13ce71d2e854f4832/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"uid\":\"32688baa2c6a65d13ce71d2e854f4832\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3a4a1b30f59ea92fe7dd7e1af5199e72e21588290b3f4e853f39c0287482d964/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network
\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"32688baa2c6a65d13ce71d2e854f4832","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.210:8443","kubernetes.io/config.hash":"32688baa2c6a65d13ce71d2e854f4832","kubernetes.io/config.seen":"2021-08-13T21:12:24.637275916Z","kuberne
tes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71","pid":2674,"status":"running","bundle":"/run/containers/storage/overlay-containers/5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71/userdata","rootfs":"/var/lib/containers/storage/overlay/314ab571072ff1ea8a263a2b5f70aa8a6c91666442868285983e696965d2429a/merged","created":"2021-08-13T21:12:27.629888101Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a0decd21","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a0decd21\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMess
agePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:27.413516541Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"42b2831a6feaa48869fe13cec6b8ce22\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813210910-30853_42b2831a6feaa4886
9fe13cec6b8ce22/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/314ab571072ff1ea8a263a2b5f70aa8a6c91666442868285983e696965d2429a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"
/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/42b2831a6feaa48869fe13cec6b8ce22/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/42b2831a6feaa48869fe13cec6b8ce22/containers/kube-scheduler/2f3f916e\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.hash":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.seen":"2021-08-13T21:12:24.637270975Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a3
7684","pid":3155,"status":"running","bundle":"/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata","rootfs":"/var/lib/containers/storage/overlay/446e5699257f70b0b9da0c67f506429b0b14e53729e304a893ba9c604bab4f43/merged","created":"2021-08-13T21:12:52.320476359Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648950310Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"eth0\",\"mac\":\"a6:4e:ac:71:f7:63\",\"sandbox\":\"/var/run/netns/fb111da7-851d-4044-aba7-9fd561393300\"}],\"ips\":[{\"version\":\"4\",\"interface\":0,\"address\":\"10.88.0.3/16\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podad347f93_2bcc_4e1c_b82c_66f4854c46d2.slice","io.kubernetes.cri-o.ContainerID":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be
7a535a37684","io.kubernetes.cri-o.ContainerName":"k8s_POD_metrics-server-7c784ccb57-mrklk_kube-system_ad347f93-2bcc-4e1c-b82c-66f4854c46d2_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.962337579Z","io.kubernetes.cri-o.HostName":"metrics-server-7c784ccb57-mrklk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"metrics-server-7c784ccb57-mrklk","io.kubernetes.cri-o.Labels":"{\"pod-template-hash\":\"7c784ccb57\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ad347f93-2bcc-4e1c-b82c-66f4854c46d2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"metrics-server-7c784ccb57-mrklk\",\"k8s-app\":\"metrics-server\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_metric
s-server-7c784ccb57-mrklk_ad347f93-2bcc-4e1c-b82c-66f4854c46d2/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"metrics-server-7c784ccb57-mrklk\",\"uid\":\"ad347f93-2bcc-4e1c-b82c-66f4854c46d2\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/446e5699257f70b0b9da0c67f506429b0b14e53729e304a893ba9c604bab4f43/merged","io.kubernetes.cri-o.Name":"k8s_metrics-server-7c784ccb57-mrklk_kube-system_ad347f93-2bcc-4e1c-b82c-66f4854c46d2_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a
97b7a4be7a535a37684","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/shm","io.kubernetes.pod.name":"metrics-server-7c784ccb57-mrklk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ad347f93-2bcc-4e1c-b82c-66f4854c46d2","k8s-app":"metrics-server","kubernetes.io/config.seen":"2021-08-13T21:12:49.648950310Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"7c784ccb57"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","pid":2546,"status":"running","bundle":"/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata","rootfs":"/var/lib/containers/storage/overlay/72c422432bfef659fd034d2281c2a3da5a0fb397368f3d6d4638551fd5f0e1d1/merged","created":"202
1-08-13T21:12:26.542798972Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637246331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podfb68b72f76f9aae78202c9c8c37cac6a.slice","io.kubernetes.cri-o.ContainerID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.115325897Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb
98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72c422432bf
ef659fd034d2281c2a3da5a0fb397368f3d6d4638551fd5f0e1d1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.n
amespace":"kube-system","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","pid":3041,"status":"running","bundle":"/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata","rootfs":"/var/lib/containers/storage/overlay/7feba88cedb45de776bdde939975b97e74236cb6172d5e4791d6489f20c11d17/merged","created":"2021-08-13T21:12:51.697423843Z","annotations":{"controller-revision-hash":"5cb9855ccb","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648925585Z\",\"kubernetes.io/config.source\":\"api\"}
","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice","io.kubernetes.cri-o.ContainerID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:51.028420645Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-qt9ld","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"5cb9855ccb\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\",\"io.kubern
etes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-qt9ld\",\"pod-template-generation\":\"1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qt9ld_4e36061f-0559-4cde-9b0a-b5cb328d0d76/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-qt9ld\",\"uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7feba88cedb45de776bdde939975b97e74236cb6172d5e4791d6489f20c11d17/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f
16335265ff4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/shm","io.kubernetes.pod.name":"kube-proxy-qt9ld","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4e36061f-0559-4cde-9b0a-b5cb328d0d76","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T21:12:49.648925585Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203","pid":2798,"status":"running","bundle":"/run/containers/storage/overlay-containers/81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203/userdata","ro
otfs":"/var/lib/containers/storage/overlay/d58910b4d534a32e58527d81f13936cd00c9146ec525bea088321525562bb354/merged","created":"2021-08-13T21:12:38.440159537Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f0960535","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f0960535\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:38.322805917Z","io.kubernetes.cri-o.Image":"k8s.
gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210813210910-30853_eab7e5e84ea4e6309241a6623f47ddd8/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d58910b4d534a32e58527d81f13936cd00c9146ec525bea088321525562bb354/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/stor
age/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.SandboxName":"k8s_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eab7e5e84ea4e6309241a6623f47ddd8/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eab7e5e84ea4e6309241a6623f47ddd8/containers/etcd/547f43dc\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/min
ikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eab7e5e84ea4e6309241a6623f47ddd8","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.210:2379","kubernetes.io/config.hash":"eab7e5e84ea4e6309241a6623f47ddd8","kubernetes.io/config.seen":"2021-08-13T21:12:24.637273650Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","pid":2885,"status":"running","bundle":"/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata","rootfs":"/var/lib/containers/storage/overlay/308b60610aaed7e5bbc28c79aeea97bf2d93b3cfb53afc80bb7b0360839654a1/merged","created":"2021-08-
13T21:12:50.599744507Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\
\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648945632Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod5367404c_0e33_4f6c_9bb7_8fdb4ebbe4f6.slice","io.kubernetes.cri-o.ContainerID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.36272595Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.
Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/308b60610aaed7e5bbc28c79aeea97bf2d93b3cfb53afc80bb7b0360839654a1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-
o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\"
:\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T21:12:49.648945632Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669","pid":3337,"status":"running","bundle":"/run/containers/storage/overlay-containers/e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669/userdata","rootfs":"/var/lib/containers/storage/overlay/6ab4454f7adadb999ea980d11b6af71705eb9d04e430be6c0836c445554d2e43/merged","created":"2021-08-13T21:12:54.119158329Z
","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c81cf57","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c81cf57\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:53.836957347Z","io.kubernetes.cri-o.Image":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.22.0-rc.0","io.kubernetes.cri-o
.ImageRef":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-qt9ld\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qt9ld_4e36061f-0559-4cde-9b0a-b5cb328d0d76/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6ab4454f7adadb999ea980d11b6af71705eb9d04e430be6c0836c445554d2e43/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4
cd005369f16335265ff4","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/containers/kube-proxy/74117e11\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/va
r/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/volumes/kubernetes.io~projected/kube-api-access-jkstk\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-qt9ld","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e36061f-0559-4cde-9b0a-b5cb328d0d76","kubernetes.io/config.seen":"2021-08-13T21:12:49.648925585Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78/userdata","rootfs":"/var/lib/containers/storage/overlay/1101f82f7089d4ed381b9555cde7daf15a1947e709d6f261bac6411fc806cd61/merged","created":"2021-08-13T21:12
:27.271783918Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3da1e13c","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3da1e13c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:27.083240226Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-ma
nager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1101f82f7089d4ed381b9555cde7daf15a1947e709d6f261bac6411fc806cd61/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_1","io.kubernetes.cri-o.ResolvPath":"/var/run/conta
iners/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/containers/kube-controller-manager/1e416501\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},
{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1","pid":3162,"status":"running","bundle":"/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata","rootfs":"/var/lib/containers/storage/overlay/0dad6a9691e11420032a01df60f2eff71f5026a23e0740513a6cc1b3ce0c6df7/merged","created":"2021-08-13T21:12:52.319534114Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648955325Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"eth0\",\"mac\":\"0e:e7:4d:91:98:06\",\"sandbox\":\"/var/run/netns/5b0194a8-d866-439d-9c6d-d11855fb7563\"}],\"ips\":[{\"version\":\"4\",\"interface\":0,\"address\":\"10.88.0.2/16\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod0d2dab50_994b_4314_8922_0e8a913a9b26.slice","io.kubern
etes.cri-o.ContainerID":"f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-78fcd69978-bc587_kube-system_0d2dab50-994b-4314-8922-0e8a913a9b26_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.773215997Z","io.kubernetes.cri-o.HostName":"coredns-78fcd69978-bc587","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-78fcd69978-bc587","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"0d2dab50-994b-4314-8922-0e8a913a9b26\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-78fcd69978-bc587\",\"pod-template-hash\":\"78fcd69978\",\"k8s-app\":\"kube-dns\"}","io.kubernetes.cri-
o.LogPath":"/var/log/pods/kube-system_coredns-78fcd69978-bc587_0d2dab50-994b-4314-8922-0e8a913a9b26/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-78fcd69978-bc587\",\"uid\":\"0d2dab50-994b-4314-8922-0e8a913a9b26\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dad6a9691e11420032a01df60f2eff71f5026a23e0740513a6cc1b3ce0c6df7/merged","io.kubernetes.cri-o.Name":"k8s_coredns-78fcd69978-bc587_kube-system_0d2dab50-994b-4314-8922-0e8a913a9b26_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f2e7470876d5d92228129a
5e90504812846f6f58debda7a95d83c8e6c89c9fe1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/shm","io.kubernetes.pod.name":"coredns-78fcd69978-bc587","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0d2dab50-994b-4314-8922-0e8a913a9b26","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T21:12:49.648955325Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"78fcd69978"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24","pid":2856,"status":"running","bundle":"/run/containers/storage/overlay-containers/f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24/userdata","rootfs":"/var/lib/containers/storage/overlay/9de1573f38bb7b42db09b42aaaf71354df3febdc978623a8e014dd9a6c1ebf60/merged","cre
ated":"2021-08-13T21:12:50.148091333Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3da1e13c","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3da1e13c\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:49.829023971Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gc
r.io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9de1573f38bb7b42db09b42aaaf71354df3febdc978623a8e014dd9a6c1ebf60/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_2","io.kubernetes.cri-o.Reso
lvPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/containers/kube-controller-manager/88978929\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.con
f\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner"
:"root"},{"ociVersion":"1.0.2-dev","id":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata","rootfs":"/var/lib/containers/storage/overlay/d361b4237f4427c910cd7643c35ad813f6d586f1e4cd4ee21aad180def39714d/merged","created":"2021-08-13T21:12:26.673763544Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637270975Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod42b2831a6feaa48869fe13cec6b8ce22.slice","io.kubernetes.cri-o.ContainerID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-newest-cni-2021
0813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.161517056Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813210910-30853_42b2831a6feaa48869f
e13cec6b8ce22/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"uid\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d361b4237f4427c910cd7643c35ad813f6d586f1e4cd4ee21aad180def39714d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe21897
0c882","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.hash":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.seen":"2021-08-13T21:12:24.637270975Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 21:12:57.060402   14630 cri.go:113] list returned 15 containers
	I0813 21:12:57.060419   14630 cri.go:116] container: {ID:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 Status:running}
	I0813 21:12:57.060430   14630 cri.go:116] container: {ID:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe Status:running}
	I0813 21:12:57.060435   14630 cri.go:116] container: {ID:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995 Status:running}
	I0813 21:12:57.060441   14630 cri.go:118] skipping 248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995 - not in ps
	I0813 21:12:57.060446   14630 cri.go:116] container: {ID:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534 Status:running}
	I0813 21:12:57.060451   14630 cri.go:118] skipping 433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534 - not in ps
	I0813 21:12:57.060456   14630 cri.go:116] container: {ID:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71 Status:running}
	I0813 21:12:57.060464   14630 cri.go:116] container: {ID:707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684 Status:running}
	I0813 21:12:57.060471   14630 cri.go:118] skipping 707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684 - not in ps
	I0813 21:12:57.060483   14630 cri.go:116] container: {ID:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054 Status:running}
	I0813 21:12:57.060489   14630 cri.go:118] skipping 769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054 - not in ps
	I0813 21:12:57.060495   14630 cri.go:116] container: {ID:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4 Status:running}
	I0813 21:12:57.060505   14630 cri.go:118] skipping 7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4 - not in ps
	I0813 21:12:57.060510   14630 cri.go:116] container: {ID:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203 Status:running}
	I0813 21:12:57.060517   14630 cri.go:116] container: {ID:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef Status:running}
	I0813 21:12:57.060524   14630 cri.go:118] skipping cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef - not in ps
	I0813 21:12:57.060530   14630 cri.go:116] container: {ID:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669 Status:running}
	I0813 21:12:57.060536   14630 cri.go:116] container: {ID:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78 Status:stopped}
	I0813 21:12:57.060544   14630 cri.go:122] skipping {f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78 stopped}: state = "stopped", want "running"
	I0813 21:12:57.060587   14630 cri.go:116] container: {ID:f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1 Status:running}
	I0813 21:12:57.060595   14630 cri.go:118] skipping f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1 - not in ps
	I0813 21:12:57.060600   14630 cri.go:116] container: {ID:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24 Status:running}
	I0813 21:12:57.060608   14630 cri.go:116] container: {ID:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882 Status:running}
	I0813 21:12:57.060615   14630 cri.go:118] skipping f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882 - not in ps
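The skip decisions above are minikube's pause filter at work: `runc list` returns every OCI container on the host, including the sandbox (pause) containers, while `crictl ps` returns only the workload containers in the requested namespaces, so cri.go keeps just the IDs that appear in both sets and are currently running. A minimal Go sketch of that intersection; the `container` struct and `filterPausable` helper are named here purely for illustration and are not minikube's actual code:

	package main

	import "fmt"

	// container mirrors the {ID Status} pairs printed by the cri.go:116 lines above.
	type container struct {
		ID     string
		Status string
	}

	// filterPausable keeps only IDs that crictl reported (inPs) and that runc
	// says are running; everything else is skipped, matching the cri.go:118
	// ("not in ps") and cri.go:122 (state = "stopped", want "running") lines.
	func filterPausable(all []container, inPs map[string]bool) []string {
		var ids []string
		for _, c := range all {
			if !inPs[c.ID] || c.Status != "running" {
				continue
			}
			ids = append(ids, c.ID)
		}
		return ids
	}

	func main() {
		all := []container{
			{ID: "09c7d19e", Status: "running"},
			{ID: "f0de6c0b", Status: "stopped"},
		}
		fmt.Println(filterPausable(all, map[string]bool{"09c7d19e": true, "f0de6c0b": true}))
	}
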
	I0813 21:12:57.060665   14630 ssh_runner.go:149] Run: sudo runc pause 09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291
	I0813 21:12:57.080296   14630 ssh_runner.go:149] Run: sudo runc pause 09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe
	I0813 21:12:57.101053   14630 retry.go:31] will retry after 276.165072ms: runc: sudo runc pause 09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:57Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:12:57.377543   14630 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:57.389344   14630 pause.go:50] kubelet running: false
	I0813 21:12:57.389393   14630 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
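Stopping the kubelet is what keeps the pause durable: `systemctl is-active --quiet` reports the unit state purely through its exit code (0 means active), and `disable --now` both stops the unit and removes it from boot targets, so nothing restarts the paused containers. A minimal local sketch, assuming a directly reachable systemd host rather than minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning returns true when systemd reports the kubelet unit as
	// active; `is-active --quiet` encodes the answer in its exit code.
	func kubeletRunning() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		if kubeletRunning() {
			// --now stops the unit immediately in addition to disabling it.
			if err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run(); err != nil {
				fmt.Println("disable kubelet:", err)
			}
		}
	}
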
	I0813 21:12:57.583295   14630 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:57.583401   14630 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:57.697651   14630 cri.go:76] found id: "e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669"
	I0813 21:12:57.697678   14630 cri.go:76] found id: "21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe"
	I0813 21:12:57.697682   14630 cri.go:76] found id: "f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24"
	I0813 21:12:57.697686   14630 cri.go:76] found id: "81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203"
	I0813 21:12:57.697689   14630 cri.go:76] found id: "5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71"
	I0813 21:12:57.697693   14630 cri.go:76] found id: "f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78"
	I0813 21:12:57.697696   14630 cri.go:76] found id: "09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291"
	I0813 21:12:57.697700   14630 cri.go:76] found id: ""
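The single `sudo -s eval` command above works because `crictl ps --quiet` prints bare container IDs, one per line, and `--label` filters on the io.kubernetes.pod.namespace label; chaining one invocation per namespace with `;` collects one ID list across all four namespaces in a single SSH round trip. A sketch of how such a command string could be assembled (the `buildCrictlCmd` helper is hypothetical):

	package main

	import (
		"fmt"
		"strings"
	)

	// buildCrictlCmd assembles one shell line that lists container IDs per
	// namespace label, mirroring the `sudo -s eval "..."` call above.
	func buildCrictlCmd(namespaces []string) string {
		parts := make([]string, 0, len(namespaces))
		for _, ns := range namespaces {
			parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
		}
		return strings.Join(parts, "; ")
	}

	func main() {
		fmt.Println(buildCrictlCmd([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"}))
	}
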
	I0813 21:12:57.697755   14630 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 21:12:57.744669   14630 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291","pid":2608,"status":"paused","bundle":"/run/containers/storage/overlay-containers/09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291/userdata","rootfs":"/var/lib/containers/storage/overlay/113aacb9174c2faefce53ee11ef68088562e27ebc366e16860d30d835b7b5124/merged","created":"2021-08-13T21:12:27.070852027Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffb6a91b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ffb6a91b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminat
ionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.902176937Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"32688baa2c6a65d13ce71d2e854f4832\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813210910-30853_32688baa2c6a65d13ce71d2e854f4832/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":
"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/113aacb9174c2faefce53ee11ef68088562e27ebc366e16860d30d835b7b5124/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/32688baa2c6a65d13c
e71d2e854f4832/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/32688baa2c6a65d13ce71d2e854f4832/containers/kube-apiserver/bd3cb872\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"32688baa2c6a65d13ce71d2e854f4832","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.210:8443","kubernetes.io/config.hash":"32688baa2c6a65d13ce71d2e854f4832","kubernetes.io/config.seen":"2021-08-13T21:12:24.637275916Z","kubernetes.io/config.source":"file","org.systemd.
property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe","pid":3275,"status":"running","bundle":"/run/containers/storage/overlay-containers/21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe/userdata","rootfs":"/var/lib/containers/storage/overlay/f423c225b9d8b2fa7970566f83c5a865812205115d572bce7bf004becf8b9891/merged","created":"2021-08-13T21:12:53.604162723Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6be87df7","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6be87df7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessage
Path\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:53.320445828Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/storage-provisioner/0.lo
g","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f423c225b9d8b2fa7970566f83c5a865812205115d572bce7bf004becf8b9891/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\
"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/containers/storage-provisioner/3b35aef2\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/volumes/kubernetes.io~projected/kube-api-access-pd5tc\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"
command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T21:12:49.648945632Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","pid":2526,"status":"running","bundle":"/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata","rootfs":"/var/lib/containers/storage/overlay/bd6fd78678ecd2c6f2642788b570990a1f4d97c8623e3a6748bae10ad28611c0/merged","created":"2021-08-13T21:12:26.
383071519Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.210:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637273650Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podeab7e5e84ea4e6309241a6623f47ddd8.slice","io.kubernetes.cri-o.ContainerID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.146024584Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overla
y-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210813210910-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210813210910-30853_eab7e5e84ea4e6309241a6623f47ddd8/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-newest-cni-20210813210910-30853\",\"uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bd6fd78678ecd2c6f2642788b570990a1f4d97c8623e3a6748bae10ad28611c0/merged","io
.kubernetes.cri-o.Name":"k8s_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/shm","io.kubernetes.pod.name":"etcd-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"eab7e5e84ea4e6309241a6623f47ddd8","kubeadm.kubernetes.
io/etcd.advertise-client-urls":"https://192.168.39.210:2379","kubernetes.io/config.hash":"eab7e5e84ea4e6309241a6623f47ddd8","kubernetes.io/config.seen":"2021-08-13T21:12:24.637273650Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","pid":2534,"status":"running","bundle":"/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata","rootfs":"/var/lib/containers/storage/overlay/3a4a1b30f59ea92fe7dd7e1af5199e72e21588290b3f4e853f39c0287482d964/merged","created":"2021-08-13T21:12:26.433134857Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"32688baa2c6a65d13ce71d2e854f4832\",\"kubeadm.kubernetes.io/kube-a
piserver.advertise-address.endpoint\":\"192.168.39.210:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637275916Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod32688baa2c6a65d13ce71d2e854f4832.slice","io.kubernetes.cri-o.ContainerID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.141226865Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"component\
":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"32688baa2c6a65d13ce71d2e854f4832\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813210910-30853_32688baa2c6a65d13ce71d2e854f4832/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"uid\":\"32688baa2c6a65d13ce71d2e854f4832\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3a4a1b30f59ea92fe7dd7e1af5199e72e21588290b3f4e853f39c0287482d964/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\
":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"32688baa2c6a65d13ce71d2e854f4832","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.210:8443","kubernetes.io/config.hash":"32688baa2c6a65d13ce71d2e854f4832","kubernetes.io/config.seen":"2021-08-13T21:12:24.637275916Z","kubernet
es.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71","pid":2674,"status":"running","bundle":"/run/containers/storage/overlay-containers/5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71/userdata","rootfs":"/var/lib/containers/storage/overlay/314ab571072ff1ea8a263a2b5f70aa8a6c91666442868285983e696965d2429a/merged","created":"2021-08-13T21:12:27.629888101Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a0decd21","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a0decd21\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessa
gePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:27.413516541Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"42b2831a6feaa48869fe13cec6b8ce22\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813210910-30853_42b2831a6feaa48869
fe13cec6b8ce22/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/314ab571072ff1ea8a263a2b5f70aa8a6c91666442868285983e696965d2429a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/
etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/42b2831a6feaa48869fe13cec6b8ce22/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/42b2831a6feaa48869fe13cec6b8ce22/containers/kube-scheduler/2f3f916e\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.hash":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.seen":"2021-08-13T21:12:24.637270975Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37
684","pid":3155,"status":"running","bundle":"/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata","rootfs":"/var/lib/containers/storage/overlay/446e5699257f70b0b9da0c67f506429b0b14e53729e304a893ba9c604bab4f43/merged","created":"2021-08-13T21:12:52.320476359Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648950310Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"eth0\",\"mac\":\"a6:4e:ac:71:f7:63\",\"sandbox\":\"/var/run/netns/fb111da7-851d-4044-aba7-9fd561393300\"}],\"ips\":[{\"version\":\"4\",\"interface\":0,\"address\":\"10.88.0.3/16\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podad347f93_2bcc_4e1c_b82c_66f4854c46d2.slice","io.kubernetes.cri-o.ContainerID":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7
a535a37684","io.kubernetes.cri-o.ContainerName":"k8s_POD_metrics-server-7c784ccb57-mrklk_kube-system_ad347f93-2bcc-4e1c-b82c-66f4854c46d2_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.962337579Z","io.kubernetes.cri-o.HostName":"metrics-server-7c784ccb57-mrklk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"metrics-server-7c784ccb57-mrklk","io.kubernetes.cri-o.Labels":"{\"pod-template-hash\":\"7c784ccb57\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ad347f93-2bcc-4e1c-b82c-66f4854c46d2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"metrics-server-7c784ccb57-mrklk\",\"k8s-app\":\"metrics-server\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_metrics
-server-7c784ccb57-mrklk_ad347f93-2bcc-4e1c-b82c-66f4854c46d2/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"metrics-server-7c784ccb57-mrklk\",\"uid\":\"ad347f93-2bcc-4e1c-b82c-66f4854c46d2\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/446e5699257f70b0b9da0c67f506429b0b14e53729e304a893ba9c604bab4f43/merged","io.kubernetes.cri-o.Name":"k8s_metrics-server-7c784ccb57-mrklk_kube-system_ad347f93-2bcc-4e1c-b82c-66f4854c46d2_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a9
7b7a4be7a535a37684","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/shm","io.kubernetes.pod.name":"metrics-server-7c784ccb57-mrklk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ad347f93-2bcc-4e1c-b82c-66f4854c46d2","k8s-app":"metrics-server","kubernetes.io/config.seen":"2021-08-13T21:12:49.648950310Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"7c784ccb57"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","pid":2546,"status":"running","bundle":"/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata","rootfs":"/var/lib/containers/storage/overlay/72c422432bfef659fd034d2281c2a3da5a0fb397368f3d6d4638551fd5f0e1d1/merged","created":"2021
-08-13T21:12:26.542798972Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637246331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podfb68b72f76f9aae78202c9c8c37cac6a.slice","io.kubernetes.cri-o.ContainerID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.115325897Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb9
8b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72c422432bfe
f659fd034d2281c2a3da5a0fb397368f3d6d4638551fd5f0e1d1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.na
mespace":"kube-system","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","pid":3041,"status":"running","bundle":"/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata","rootfs":"/var/lib/containers/storage/overlay/7feba88cedb45de776bdde939975b97e74236cb6172d5e4791d6489f20c11d17/merged","created":"2021-08-13T21:12:51.697423843Z","annotations":{"controller-revision-hash":"5cb9855ccb","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648925585Z\",\"kubernetes.io/config.source\":\"api\"}"
,"io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice","io.kubernetes.cri-o.ContainerID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:51.028420645Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-qt9ld","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"5cb9855ccb\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\",\"io.kuberne
tes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-qt9ld\",\"pod-template-generation\":\"1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qt9ld_4e36061f-0559-4cde-9b0a-b5cb328d0d76/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-qt9ld\",\"uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7feba88cedb45de776bdde939975b97e74236cb6172d5e4791d6489f20c11d17/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f1
6335265ff4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/shm","io.kubernetes.pod.name":"kube-proxy-qt9ld","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4e36061f-0559-4cde-9b0a-b5cb328d0d76","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T21:12:49.648925585Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203","pid":2798,"status":"running","bundle":"/run/containers/storage/overlay-containers/81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203/userdata","roo
tfs":"/var/lib/containers/storage/overlay/d58910b4d534a32e58527d81f13936cd00c9146ec525bea088321525562bb354/merged","created":"2021-08-13T21:12:38.440159537Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f0960535","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f0960535\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:38.322805917Z","io.kubernetes.cri-o.Image":"k8s.g
cr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210813210910-30853_eab7e5e84ea4e6309241a6623f47ddd8/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d58910b4d534a32e58527d81f13936cd00c9146ec525bea088321525562bb354/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/stora
ge/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.SandboxName":"k8s_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eab7e5e84ea4e6309241a6623f47ddd8/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eab7e5e84ea4e6309241a6623f47ddd8/containers/etcd/547f43dc\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/mini
kube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eab7e5e84ea4e6309241a6623f47ddd8","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.210:2379","kubernetes.io/config.hash":"eab7e5e84ea4e6309241a6623f47ddd8","kubernetes.io/config.seen":"2021-08-13T21:12:24.637273650Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","pid":2885,"status":"running","bundle":"/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata","rootfs":"/var/lib/containers/storage/overlay/308b60610aaed7e5bbc28c79aeea97bf2d93b3cfb53afc80bb7b0360839654a1/merged","created":"2021-08-1
3T21:12:50.599744507Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\
"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648945632Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod5367404c_0e33_4f6c_9bb7_8fdb4ebbe4f6.slice","io.kubernetes.cri-o.ContainerID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.36272595Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.L
abels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/308b60610aaed7e5bbc28c79aeea97bf2d93b3cfb53afc80bb7b0360839654a1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o
.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":
\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T21:12:49.648945632Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669","pid":3337,"status":"running","bundle":"/run/containers/storage/overlay-containers/e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669/userdata","rootfs":"/var/lib/containers/storage/overlay/6ab4454f7adadb999ea980d11b6af71705eb9d04e430be6c0836c445554d2e43/merged","created":"2021-08-13T21:12:54.119158329Z"
,"annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c81cf57","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c81cf57\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:53.836957347Z","io.kubernetes.cri-o.Image":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.22.0-rc.0","io.kubernetes.cri-o.
ImageRef":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-qt9ld\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qt9ld_4e36061f-0559-4cde-9b0a-b5cb328d0d76/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6ab4454f7adadb999ea980d11b6af71705eb9d04e430be6c0836c445554d2e43/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4c
d005369f16335265ff4","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/containers/kube-proxy/74117e11\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var
/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/volumes/kubernetes.io~projected/kube-api-access-jkstk\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-qt9ld","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e36061f-0559-4cde-9b0a-b5cb328d0d76","kubernetes.io/config.seen":"2021-08-13T21:12:49.648925585Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78/userdata","rootfs":"/var/lib/containers/storage/overlay/1101f82f7089d4ed381b9555cde7daf15a1947e709d6f261bac6411fc806cd61/merged","created":"2021-08-13T21:12:
27.271783918Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3da1e13c","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3da1e13c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:27.083240226Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-man
ager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1101f82f7089d4ed381b9555cde7daf15a1947e709d6f261bac6411fc806cd61/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_1","io.kubernetes.cri-o.ResolvPath":"/var/run/contai
ners/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/containers/kube-controller-manager/1e416501\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{
\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"
1.0.2-dev","id":"f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1","pid":3162,"status":"running","bundle":"/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata","rootfs":"/var/lib/containers/storage/overlay/0dad6a9691e11420032a01df60f2eff71f5026a23e0740513a6cc1b3ce0c6df7/merged","created":"2021-08-13T21:12:52.319534114Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648955325Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"eth0\",\"mac\":\"0e:e7:4d:91:98:06\",\"sandbox\":\"/var/run/netns/5b0194a8-d866-439d-9c6d-d11855fb7563\"}],\"ips\":[{\"version\":\"4\",\"interface\":0,\"address\":\"10.88.0.2/16\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod0d2dab50_994b_4314_8922_0e8a913a9b26.slice","io.kuberne
tes.cri-o.ContainerID":"f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-78fcd69978-bc587_kube-system_0d2dab50-994b-4314-8922-0e8a913a9b26_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.773215997Z","io.kubernetes.cri-o.HostName":"coredns-78fcd69978-bc587","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-78fcd69978-bc587","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"0d2dab50-994b-4314-8922-0e8a913a9b26\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-78fcd69978-bc587\",\"pod-template-hash\":\"78fcd69978\",\"k8s-app\":\"kube-dns\"}","io.kubernetes.cri-o
.LogPath":"/var/log/pods/kube-system_coredns-78fcd69978-bc587_0d2dab50-994b-4314-8922-0e8a913a9b26/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-78fcd69978-bc587\",\"uid\":\"0d2dab50-994b-4314-8922-0e8a913a9b26\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dad6a9691e11420032a01df60f2eff71f5026a23e0740513a6cc1b3ce0c6df7/merged","io.kubernetes.cri-o.Name":"k8s_coredns-78fcd69978-bc587_kube-system_0d2dab50-994b-4314-8922-0e8a913a9b26_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f2e7470876d5d92228129a5
e90504812846f6f58debda7a95d83c8e6c89c9fe1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/shm","io.kubernetes.pod.name":"coredns-78fcd69978-bc587","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0d2dab50-994b-4314-8922-0e8a913a9b26","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T21:12:49.648955325Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"78fcd69978"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24","pid":2856,"status":"running","bundle":"/run/containers/storage/overlay-containers/f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24/userdata","rootfs":"/var/lib/containers/storage/overlay/9de1573f38bb7b42db09b42aaaf71354df3febdc978623a8e014dd9a6c1ebf60/merged","crea
ted":"2021-08-13T21:12:50.148091333Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3da1e13c","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3da1e13c\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:49.829023971Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr
.io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9de1573f38bb7b42db09b42aaaf71354df3febdc978623a8e014dd9a6c1ebf60/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_2","io.kubernetes.cri-o.Resol
vPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/containers/kube-controller-manager/88978929\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf
\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":
"root"},{"ociVersion":"1.0.2-dev","id":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata","rootfs":"/var/lib/containers/storage/overlay/d361b4237f4427c910cd7643c35ad813f6d586f1e4cd4ee21aad180def39714d/merged","created":"2021-08-13T21:12:26.673763544Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637270975Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod42b2831a6feaa48869fe13cec6b8ce22.slice","io.kubernetes.cri-o.ContainerID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-newest-cni-20210
813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.161517056Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813210910-30853_42b2831a6feaa48869fe
13cec6b8ce22/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"uid\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d361b4237f4427c910cd7643c35ad813f6d586f1e4cd4ee21aad180def39714d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970
c882","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.hash":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.seen":"2021-08-13T21:12:24.637270975Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 21:12:57.745358   14630 cri.go:113] list returned 15 containers
	I0813 21:12:57.745371   14630 cri.go:116] container: {ID:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 Status:paused}
	I0813 21:12:57.745381   14630 cri.go:122] skipping {09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 paused}: state = "paused", want "running"
	I0813 21:12:57.745391   14630 cri.go:116] container: {ID:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe Status:running}
	I0813 21:12:57.745395   14630 cri.go:116] container: {ID:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995 Status:running}
	I0813 21:12:57.745401   14630 cri.go:118] skipping 248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995 - not in ps
	I0813 21:12:57.745406   14630 cri.go:116] container: {ID:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534 Status:running}
	I0813 21:12:57.745411   14630 cri.go:118] skipping 433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534 - not in ps
	I0813 21:12:57.745415   14630 cri.go:116] container: {ID:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71 Status:running}
	I0813 21:12:57.745420   14630 cri.go:116] container: {ID:707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684 Status:running}
	I0813 21:12:57.745431   14630 cri.go:118] skipping 707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684 - not in ps
	I0813 21:12:57.745458   14630 cri.go:116] container: {ID:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054 Status:running}
	I0813 21:12:57.745465   14630 cri.go:118] skipping 769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054 - not in ps
	I0813 21:12:57.745470   14630 cri.go:116] container: {ID:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4 Status:running}
	I0813 21:12:57.745480   14630 cri.go:118] skipping 7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4 - not in ps
	I0813 21:12:57.745485   14630 cri.go:116] container: {ID:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203 Status:running}
	I0813 21:12:57.745492   14630 cri.go:116] container: {ID:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef Status:running}
	I0813 21:12:57.745500   14630 cri.go:118] skipping cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef - not in ps
	I0813 21:12:57.745505   14630 cri.go:116] container: {ID:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669 Status:running}
	I0813 21:12:57.745519   14630 cri.go:116] container: {ID:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78 Status:stopped}
	I0813 21:12:57.745529   14630 cri.go:122] skipping {f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78 stopped}: state = "stopped", want "running"
	I0813 21:12:57.745538   14630 cri.go:116] container: {ID:f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1 Status:running}
	I0813 21:12:57.745544   14630 cri.go:118] skipping f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1 - not in ps
	I0813 21:12:57.745552   14630 cri.go:116] container: {ID:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24 Status:running}
	I0813 21:12:57.745559   14630 cri.go:116] container: {ID:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882 Status:running}
	I0813 21:12:57.745568   14630 cri.go:118] skipping f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882 - not in ps
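The skipping decisions above follow two rules: containers whose runc state is not "running" are skipped outright, and running containers whose IDs were absent from the earlier crictl listing are skipped as "not in ps". A small sketch of that filter, a hypothetical shape rather than minikube's actual source:

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterRunning keeps only containers that are running and that also appeared
// in the crictl ps listing, mirroring the cri.go "skipping ..." lines above.
func filterRunning(all []container, inPS map[string]bool) []string {
	var ids []string
	for _, c := range all {
		if c.Status != "running" {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, "running")
			continue
		}
		if !inPS[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	// Truncated placeholder IDs.
	all := []container{{"09c7d19e", "paused"}, {"21ee344d", "running"}, {"f2e74708", "running"}}
	fmt.Println(filterRunning(all, map[string]bool{"21ee344d": true}))
}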
	I0813 21:12:57.745619   14630 ssh_runner.go:149] Run: sudo runc pause 21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe
	I0813 21:12:57.765214   14630 ssh_runner.go:149] Run: sudo runc pause 21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe 5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71
	I0813 21:12:57.786811   14630 retry.go:31] will retry after 540.190908ms: runc: sudo runc pause 21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe 5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:57Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:12:58.327551   14630 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:58.340258   14630 pause.go:50] kubelet running: false
	I0813 21:12:58.340323   14630 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
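Before pausing, minikube checks whether kubelet is active and then disables and stops it, presumably so kubelet cannot restart the paused containers. A minimal sketch of that sequence, mirroring the exact systemctl arguments logged above:

package main

import (
	"fmt"
	"os/exec"
)

// stopKubelet mirrors the two commands in the log: `is-active --quiet` exits
// 0 only when the unit is active, so a non-nil error means kubelet is not
// running; the unit is then disabled and stopped in one step.
func stopKubelet() error {
	running := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	fmt.Printf("kubelet running: %v\n", running)
	return exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run()
}

func main() {
	if err := stopKubelet(); err != nil {
		fmt.Println(err)
	}
}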
	I0813 21:12:58.535510   14630 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:58.535603   14630 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
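The long `sudo -s eval` line above folds one crictl query per target namespace into a single remote command, so all four namespaces are listed in one SSH round trip. A hedged sketch of how such a command string could be assembled; crictlListCmd is an invented helper name, not a real minikube function:

package main

import (
	"fmt"
	"strings"
)

// crictlListCmd (hypothetical) builds the combined per-namespace listing
// command seen in the log line above.
func crictlListCmd(namespaces []string) string {
	parts := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
	}
	return `sudo -s eval "` + strings.Join(parts, "; ") + `"`
}

func main() {
	fmt.Println(crictlListCmd([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"}))
}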
	I0813 21:12:58.655301   14630 cri.go:76] found id: "e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669"
	I0813 21:12:58.655350   14630 cri.go:76] found id: "21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe"
	I0813 21:12:58.655357   14630 cri.go:76] found id: "f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24"
	I0813 21:12:58.655363   14630 cri.go:76] found id: "81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203"
	I0813 21:12:58.655368   14630 cri.go:76] found id: "5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71"
	I0813 21:12:58.655373   14630 cri.go:76] found id: "f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78"
	I0813 21:12:58.655378   14630 cri.go:76] found id: "09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291"
	I0813 21:12:58.655384   14630 cri.go:76] found id: ""
	I0813 21:12:58.655436   14630 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 21:12:58.698809   14630 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291","pid":2608,"status":"paused","bundle":"/run/containers/storage/overlay-containers/09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291/userdata","rootfs":"/var/lib/containers/storage/overlay/113aacb9174c2faefce53ee11ef68088562e27ebc366e16860d30d835b7b5124/merged","created":"2021-08-13T21:12:27.070852027Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffb6a91b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ffb6a91b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminat
ionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.902176937Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"32688baa2c6a65d13ce71d2e854f4832\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813210910-30853_32688baa2c6a65d13ce71d2e854f4832/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":
"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/113aacb9174c2faefce53ee11ef68088562e27ebc366e16860d30d835b7b5124/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/32688baa2c6a65d13c
e71d2e854f4832/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/32688baa2c6a65d13ce71d2e854f4832/containers/kube-apiserver/bd3cb872\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"32688baa2c6a65d13ce71d2e854f4832","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.210:8443","kubernetes.io/config.hash":"32688baa2c6a65d13ce71d2e854f4832","kubernetes.io/config.seen":"2021-08-13T21:12:24.637275916Z","kubernetes.io/config.source":"file","org.systemd.
property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe","pid":3275,"status":"paused","bundle":"/run/containers/storage/overlay-containers/21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe/userdata","rootfs":"/var/lib/containers/storage/overlay/f423c225b9d8b2fa7970566f83c5a865812205115d572bce7bf004becf8b9891/merged","created":"2021-08-13T21:12:53.604162723Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6be87df7","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6be87df7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessageP
ath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:53.320445828Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/storage-provisioner/0.log
","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f423c225b9d8b2fa7970566f83c5a865812205115d572bce7bf004becf8b9891/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"
/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/containers/storage-provisioner/3b35aef2\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/volumes/kubernetes.io~projected/kube-api-access-pd5tc\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"c
ommand\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T21:12:49.648945632Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","pid":2526,"status":"running","bundle":"/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata","rootfs":"/var/lib/containers/storage/overlay/bd6fd78678ecd2c6f2642788b570990a1f4d97c8623e3a6748bae10ad28611c0/merged","created":"2021-08-13T21:12:26.3
83071519Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.210:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637273650Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podeab7e5e84ea4e6309241a6623f47ddd8.slice","io.kubernetes.cri-o.ContainerID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.146024584Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay
-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210813210910-30853\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210813210910-30853_eab7e5e84ea4e6309241a6623f47ddd8/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-newest-cni-20210813210910-30853\",\"uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bd6fd78678ecd2c6f2642788b570990a1f4d97c8623e3a6748bae10ad28611c0/merged","io.
kubernetes.cri-o.Name":"k8s_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/shm","io.kubernetes.pod.name":"etcd-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"eab7e5e84ea4e6309241a6623f47ddd8","kubeadm.kubernetes.i
o/etcd.advertise-client-urls":"https://192.168.39.210:2379","kubernetes.io/config.hash":"eab7e5e84ea4e6309241a6623f47ddd8","kubernetes.io/config.seen":"2021-08-13T21:12:24.637273650Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","pid":2534,"status":"running","bundle":"/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata","rootfs":"/var/lib/containers/storage/overlay/3a4a1b30f59ea92fe7dd7e1af5199e72e21588290b3f4e853f39c0287482d964/merged","created":"2021-08-13T21:12:26.433134857Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"32688baa2c6a65d13ce71d2e854f4832\",\"kubeadm.kubernetes.io/kube-ap
iserver.advertise-address.endpoint\":\"192.168.39.210:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637275916Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod32688baa2c6a65d13ce71d2e854f4832.slice","io.kubernetes.cri-o.ContainerID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.141226865Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"component\"
:\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"32688baa2c6a65d13ce71d2e854f4832\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813210910-30853_32688baa2c6a65d13ce71d2e854f4832/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-newest-cni-20210813210910-30853\",\"uid\":\"32688baa2c6a65d13ce71d2e854f4832\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3a4a1b30f59ea92fe7dd7e1af5199e72e21588290b3f4e853f39c0287482d964/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-newest-cni-20210813210910-30853_kube-system_32688baa2c6a65d13ce71d2e854f4832_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\"
:2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"32688baa2c6a65d13ce71d2e854f4832","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.210:8443","kubernetes.io/config.hash":"32688baa2c6a65d13ce71d2e854f4832","kubernetes.io/config.seen":"2021-08-13T21:12:24.637275916Z","kubernete
s.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71","pid":2674,"status":"running","bundle":"/run/containers/storage/overlay-containers/5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71/userdata","rootfs":"/var/lib/containers/storage/overlay/314ab571072ff1ea8a263a2b5f70aa8a6c91666442868285983e696965d2429a/merged","created":"2021-08-13T21:12:27.629888101Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a0decd21","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a0decd21\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessag
ePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:27.413516541Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"42b2831a6feaa48869fe13cec6b8ce22\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813210910-30853_42b2831a6feaa48869f
e13cec6b8ce22/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/314ab571072ff1ea8a263a2b5f70aa8a6c91666442868285983e696965d2429a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/e
tc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/42b2831a6feaa48869fe13cec6b8ce22/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/42b2831a6feaa48869fe13cec6b8ce22/containers/kube-scheduler/2f3f916e\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.hash":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.seen":"2021-08-13T21:12:24.637270975Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a376
84","pid":3155,"status":"running","bundle":"/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata","rootfs":"/var/lib/containers/storage/overlay/446e5699257f70b0b9da0c67f506429b0b14e53729e304a893ba9c604bab4f43/merged","created":"2021-08-13T21:12:52.320476359Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648950310Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"eth0\",\"mac\":\"a6:4e:ac:71:f7:63\",\"sandbox\":\"/var/run/netns/fb111da7-851d-4044-aba7-9fd561393300\"}],\"ips\":[{\"version\":\"4\",\"interface\":0,\"address\":\"10.88.0.3/16\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podad347f93_2bcc_4e1c_b82c_66f4854c46d2.slice","io.kubernetes.cri-o.ContainerID":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a
535a37684","io.kubernetes.cri-o.ContainerName":"k8s_POD_metrics-server-7c784ccb57-mrklk_kube-system_ad347f93-2bcc-4e1c-b82c-66f4854c46d2_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.962337579Z","io.kubernetes.cri-o.HostName":"metrics-server-7c784ccb57-mrklk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"metrics-server-7c784ccb57-mrklk","io.kubernetes.cri-o.Labels":"{\"pod-template-hash\":\"7c784ccb57\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ad347f93-2bcc-4e1c-b82c-66f4854c46d2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"metrics-server-7c784ccb57-mrklk\",\"k8s-app\":\"metrics-server\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_metrics-
server-7c784ccb57-mrklk_ad347f93-2bcc-4e1c-b82c-66f4854c46d2/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"metrics-server-7c784ccb57-mrklk\",\"uid\":\"ad347f93-2bcc-4e1c-b82c-66f4854c46d2\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/446e5699257f70b0b9da0c67f506429b0b14e53729e304a893ba9c604bab4f43/merged","io.kubernetes.cri-o.Name":"k8s_metrics-server-7c784ccb57-mrklk_kube-system_ad347f93-2bcc-4e1c-b82c-66f4854c46d2_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97
b7a4be7a535a37684","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684/userdata/shm","io.kubernetes.pod.name":"metrics-server-7c784ccb57-mrklk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ad347f93-2bcc-4e1c-b82c-66f4854c46d2","k8s-app":"metrics-server","kubernetes.io/config.seen":"2021-08-13T21:12:49.648950310Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"7c784ccb57"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","pid":2546,"status":"running","bundle":"/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata","rootfs":"/var/lib/containers/storage/overlay/72c422432bfef659fd034d2281c2a3da5a0fb397368f3d6d4638551fd5f0e1d1/merged","created":"2021-
08-13T21:12:26.542798972Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637246331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podfb68b72f76f9aae78202c9c8c37cac6a.slice","io.kubernetes.cri-o.ContainerID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.115325897Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98
b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72c422432bfef
659fd034d2281c2a3da5a0fb397368f3d6d4638551fd5f0e1d1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.nam
espace":"kube-system","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","pid":3041,"status":"running","bundle":"/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata","rootfs":"/var/lib/containers/storage/overlay/7feba88cedb45de776bdde939975b97e74236cb6172d5e4791d6489f20c11d17/merged","created":"2021-08-13T21:12:51.697423843Z","annotations":{"controller-revision-hash":"5cb9855ccb","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648925585Z\",\"kubernetes.io/config.source\":\"api\"}",
"io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice","io.kubernetes.cri-o.ContainerID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:51.028420645Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-qt9ld","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"5cb9855ccb\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\",\"io.kubernet
es.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-qt9ld\",\"pod-template-generation\":\"1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qt9ld_4e36061f-0559-4cde-9b0a-b5cb328d0d76/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-qt9ld\",\"uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7feba88cedb45de776bdde939975b97e74236cb6172d5e4791d6489f20c11d17/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16
335265ff4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/shm","io.kubernetes.pod.name":"kube-proxy-qt9ld","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4e36061f-0559-4cde-9b0a-b5cb328d0d76","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T21:12:49.648925585Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203","pid":2798,"status":"running","bundle":"/run/containers/storage/overlay-containers/81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203/userdata","root
fs":"/var/lib/containers/storage/overlay/d58910b4d534a32e58527d81f13936cd00c9146ec525bea088321525562bb354/merged","created":"2021-08-13T21:12:38.440159537Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f0960535","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f0960535\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:38.322805917Z","io.kubernetes.cri-o.Image":"k8s.gc
r.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eab7e5e84ea4e6309241a6623f47ddd8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210813210910-30853_eab7e5e84ea4e6309241a6623f47ddd8/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d58910b4d534a32e58527d81f13936cd00c9146ec525bea088321525562bb354/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storag
e/overlay-containers/248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995","io.kubernetes.cri-o.SandboxName":"k8s_etcd-newest-cni-20210813210910-30853_kube-system_eab7e5e84ea4e6309241a6623f47ddd8_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eab7e5e84ea4e6309241a6623f47ddd8/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eab7e5e84ea4e6309241a6623f47ddd8/containers/etcd/547f43dc\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minik
ube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eab7e5e84ea4e6309241a6623f47ddd8","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.210:2379","kubernetes.io/config.hash":"eab7e5e84ea4e6309241a6623f47ddd8","kubernetes.io/config.seen":"2021-08-13T21:12:24.637273650Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","pid":2885,"status":"running","bundle":"/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata","rootfs":"/var/lib/containers/storage/overlay/308b60610aaed7e5bbc28c79aeea97bf2d93b3cfb53afc80bb7b0360839654a1/merged","created":"2021-08-13
T21:12:50.599744507Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"
path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648945632Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod5367404c_0e33_4f6c_9bb7_8fdb4ebbe4f6.slice","io.kubernetes.cri-o.ContainerID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.36272595Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.La
bels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/308b60610aaed7e5bbc28c79aeea97bf2d93b3cfb53afc80bb7b0360839654a1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.
PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\
"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T21:12:49.648945632Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669","pid":3337,"status":"running","bundle":"/run/containers/storage/overlay-containers/e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669/userdata","rootfs":"/var/lib/containers/storage/overlay/6ab4454f7adadb999ea980d11b6af71705eb9d04e430be6c0836c445554d2e43/merged","created":"2021-08-13T21:12:54.119158329Z",
"annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c81cf57","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c81cf57\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:53.836957347Z","io.kubernetes.cri-o.Image":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.22.0-rc.0","io.kubernetes.cri-o.I
mageRef":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-qt9ld\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e36061f-0559-4cde-9b0a-b5cb328d0d76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qt9ld_4e36061f-0559-4cde-9b0a-b5cb328d0d76/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6ab4454f7adadb999ea980d11b6af71705eb9d04e430be6c0836c445554d2e43/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd
005369f16335265ff4","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-qt9ld_kube-system_4e36061f-0559-4cde-9b0a-b5cb328d0d76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/containers/kube-proxy/74117e11\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/
run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/4e36061f-0559-4cde-9b0a-b5cb328d0d76/volumes/kubernetes.io~projected/kube-api-access-jkstk\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-qt9ld","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e36061f-0559-4cde-9b0a-b5cb328d0d76","kubernetes.io/config.seen":"2021-08-13T21:12:49.648925585Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78/userdata","rootfs":"/var/lib/containers/storage/overlay/1101f82f7089d4ed381b9555cde7daf15a1947e709d6f261bac6411fc806cd61/merged","created":"2021-08-13T21:12:2
7.271783918Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3da1e13c","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3da1e13c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:27.083240226Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-mana
ger:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1101f82f7089d4ed381b9555cde7daf15a1947e709d6f261bac6411fc806cd61/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_1","io.kubernetes.cri-o.ResolvPath":"/var/run/contain
ers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/containers/kube-controller-manager/1e416501\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\
"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1
.0.2-dev","id":"f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1","pid":3162,"status":"running","bundle":"/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata","rootfs":"/var/lib/containers/storage/overlay/0dad6a9691e11420032a01df60f2eff71f5026a23e0740513a6cc1b3ce0c6df7/merged","created":"2021-08-13T21:12:52.319534114Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T21:12:49.648955325Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"eth0\",\"mac\":\"0e:e7:4d:91:98:06\",\"sandbox\":\"/var/run/netns/5b0194a8-d866-439d-9c6d-d11855fb7563\"}],\"ips\":[{\"version\":\"4\",\"interface\":0,\"address\":\"10.88.0.2/16\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod0d2dab50_994b_4314_8922_0e8a913a9b26.slice","io.kubernet
es.cri-o.ContainerID":"f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-78fcd69978-bc587_kube-system_0d2dab50-994b-4314-8922-0e8a913a9b26_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:50.773215997Z","io.kubernetes.cri-o.HostName":"coredns-78fcd69978-bc587","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-78fcd69978-bc587","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"0d2dab50-994b-4314-8922-0e8a913a9b26\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-78fcd69978-bc587\",\"pod-template-hash\":\"78fcd69978\",\"k8s-app\":\"kube-dns\"}","io.kubernetes.cri-o.
LogPath":"/var/log/pods/kube-system_coredns-78fcd69978-bc587_0d2dab50-994b-4314-8922-0e8a913a9b26/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-78fcd69978-bc587\",\"uid\":\"0d2dab50-994b-4314-8922-0e8a913a9b26\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dad6a9691e11420032a01df60f2eff71f5026a23e0740513a6cc1b3ce0c6df7/merged","io.kubernetes.cri-o.Name":"k8s_coredns-78fcd69978-bc587_kube-system_0d2dab50-994b-4314-8922-0e8a913a9b26_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f2e7470876d5d92228129a5e
90504812846f6f58debda7a95d83c8e6c89c9fe1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1/userdata/shm","io.kubernetes.pod.name":"coredns-78fcd69978-bc587","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0d2dab50-994b-4314-8922-0e8a913a9b26","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T21:12:49.648955325Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"78fcd69978"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24","pid":2856,"status":"running","bundle":"/run/containers/storage/overlay-containers/f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24/userdata","rootfs":"/var/lib/containers/storage/overlay/9de1573f38bb7b42db09b42aaaf71354df3febdc978623a8e014dd9a6c1ebf60/merged","creat
ed":"2021-08-13T21:12:50.148091333Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3da1e13c","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3da1e13c\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T21:12:49.829023971Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.
io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210813210910-30853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb68b72f76f9aae78202c9c8c37cac6a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813210910-30853_fb68b72f76f9aae78202c9c8c37cac6a/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9de1573f38bb7b42db09b42aaaf71354df3febdc978623a8e014dd9a6c1ebf60/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_2","io.kubernetes.cri-o.Resolv
Path":"/var/run/containers/storage/overlay-containers/769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210813210910-30853_kube-system_fb68b72f76f9aae78202c9c8c37cac6a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/containers/kube-controller-manager/88978929\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb68b72f76f9aae78202c9c8c37cac6a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\
",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.hash":"fb68b72f76f9aae78202c9c8c37cac6a","kubernetes.io/config.seen":"2021-08-13T21:12:24.637246331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"
root"},{"ociVersion":"1.0.2-dev","id":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata","rootfs":"/var/lib/containers/storage/overlay/d361b4237f4427c910cd7643c35ad813f6d586f1e4cd4ee21aad180def39714d/merged","created":"2021-08-13T21:12:26.673763544Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"kubernetes.io/config.seen\":\"2021-08-13T21:12:24.637270975Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod42b2831a6feaa48869fe13cec6b8ce22.slice","io.kubernetes.cri-o.ContainerID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-newest-cni-202108
13210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T21:12:26.161517056Z","io.kubernetes.cri-o.HostName":"newest-cni-20210813210910-30853","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813210910-30853_42b2831a6feaa48869fe1
3cec6b8ce22/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-newest-cni-20210813210910-30853\",\"uid\":\"42b2831a6feaa48869fe13cec6b8ce22\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d361b4237f4427c910cd7643c35ad813f6d586f1e4cd4ee21aad180def39714d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-newest-cni-20210813210910-30853_kube-system_42b2831a6feaa48869fe13cec6b8ce22_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c
882","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210813210910-30853","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.hash":"42b2831a6feaa48869fe13cec6b8ce22","kubernetes.io/config.seen":"2021-08-13T21:12:24.637270975Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 21:12:58.699578   14630 cri.go:113] list returned 15 containers
	I0813 21:12:58.699590   14630 cri.go:116] container: {ID:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 Status:paused}
	I0813 21:12:58.699600   14630 cri.go:122] skipping {09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291 paused}: state = "paused", want "running"
	I0813 21:12:58.699608   14630 cri.go:116] container: {ID:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe Status:paused}
	I0813 21:12:58.699613   14630 cri.go:122] skipping {21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe paused}: state = "paused", want "running"
	I0813 21:12:58.699621   14630 cri.go:116] container: {ID:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995 Status:running}
	I0813 21:12:58.699625   14630 cri.go:118] skipping 248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995 - not in ps
	I0813 21:12:58.699630   14630 cri.go:116] container: {ID:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534 Status:running}
	I0813 21:12:58.699635   14630 cri.go:118] skipping 433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534 - not in ps
	I0813 21:12:58.699638   14630 cri.go:116] container: {ID:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71 Status:running}
	I0813 21:12:58.699643   14630 cri.go:116] container: {ID:707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684 Status:running}
	I0813 21:12:58.699651   14630 cri.go:118] skipping 707cc9edb526c03f1f227d2b83c88f5cf2c69b1f6e47a97b7a4be7a535a37684 - not in ps
	I0813 21:12:58.699661   14630 cri.go:116] container: {ID:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054 Status:running}
	I0813 21:12:58.699667   14630 cri.go:118] skipping 769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054 - not in ps
	I0813 21:12:58.699671   14630 cri.go:116] container: {ID:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4 Status:running}
	I0813 21:12:58.699676   14630 cri.go:118] skipping 7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4 - not in ps
	I0813 21:12:58.699679   14630 cri.go:116] container: {ID:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203 Status:running}
	I0813 21:12:58.699685   14630 cri.go:116] container: {ID:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef Status:running}
	I0813 21:12:58.699689   14630 cri.go:118] skipping cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef - not in ps
	I0813 21:12:58.699693   14630 cri.go:116] container: {ID:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669 Status:running}
	I0813 21:12:58.699698   14630 cri.go:116] container: {ID:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78 Status:stopped}
	I0813 21:12:58.699703   14630 cri.go:122] skipping {f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78 stopped}: state = "stopped", want "running"
	I0813 21:12:58.699707   14630 cri.go:116] container: {ID:f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1 Status:running}
	I0813 21:12:58.699712   14630 cri.go:118] skipping f2e7470876d5d92228129a5e90504812846f6f58debda7a95d83c8e6c89c9fe1 - not in ps
	I0813 21:12:58.699716   14630 cri.go:116] container: {ID:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24 Status:running}
	I0813 21:12:58.699720   14630 cri.go:116] container: {ID:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882 Status:running}
	I0813 21:12:58.699726   14630 cri.go:118] skipping f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882 - not in ps
	I0813 21:12:58.699765   14630 ssh_runner.go:149] Run: sudo runc pause 5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71
	I0813 21:12:58.719092   14630 ssh_runner.go:149] Run: sudo runc pause 5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71 81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203
	I0813 21:12:58.740464   14630 out.go:177] 
	W0813 21:12:58.740626   14630 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71 81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:58Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	
	W0813 21:12:58.740641   14630 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0813 21:12:58.751861   14630 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 21:12:58.753809   14630 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210813210910-30853 --alsologtostderr -v=1 failed: exit status 80
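The failure mode is visible in the stderr capture above: after filtering the `runc list` output down to running containers, minikube paused one container with a single-ID `sudo runc pause` invocation (ssh_runner.go:149 at 21:12:58.699), then passed two container IDs to the next invocation (21:12:58.719). runc's pause subcommand accepts exactly one container ID per call, so the batched invocation exits with the "requires exactly 1 argument(s)" usage error and minikube aborts with GUEST_PAUSE, surfaced here as exit status 80. Below is a minimal sketch of the one-invocation-per-ID shape that avoids the usage error; it is illustrative only (not minikube's code), and it runs runc directly via os/exec instead of minikube's ssh_runner abstraction:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// pauseContainers issues one `runc pause` per container ID, since runc's
	// pause subcommand takes exactly one argument. Pausing one at a time also
	// identifies which container could not be paused, instead of the whole
	// batch failing up front on a usage error.
	func pauseContainers(ids []string) error {
		for _, id := range ids {
			out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput()
			if err != nil {
				return fmt.Errorf("runc pause %s failed: %v\n%s", id, err, out)
			}
		}
		return nil
	}
	
	func main() {
		// Hypothetical ID; real IDs come from the filtered `runc list` output.
		if err := pauseContainers([]string{"<container-id>"}); err != nil {
			fmt.Println(err)
		}
	}

Because the batched call failed after the earlier single-ID call had already succeeded, one container is likely left paused, which would explain why the status check below reports the host as Running yet exits with status 2.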
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853: exit status 2 (244.514379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813210910-30853 logs -n 25
E0813 21:13:08.232470   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 21:13:09.456750   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:13:16.008481   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.013734   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.023968   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.044184   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.084425   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.164715   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.325098   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:16.645652   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:17.286588   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:18.567619   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:21.127903   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:26.249007   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:28.740086   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205823-30853/client.crt: no such file or directory
E0813 21:13:36.489324   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:13:37.709191   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
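The cert_rotation errors interleaved above appear to come from client-go's certificate-rotation watcher in the long-running test process: it is still tracking client certificates for profiles that earlier tests had already deleted (old-k8s-version, default-k8s-different-port, no-preload, and others, per the Audit table below), so opening their client.crt files fails with "no such file or directory". They are shared-state noise from the test harness rather than part of this Pause failure.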
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210813210910-30853 logs -n 25: exit status 110 (40.969536316s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:07 UTC | Fri, 13 Aug 2021 21:09:09 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:09 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:11 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:10:25 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:36 UTC | Fri, 13 Aug 2021 21:10:36 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:10:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:40 UTC | Fri, 13 Aug 2021 21:10:41 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:42 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:43 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:10:58 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:14 UTC | Fri, 13 Aug 2021 21:11:14 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | no-preload-20210813205915-30853                            | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:17 UTC | Fri, 13 Aug 2021 21:11:18 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | no-preload-20210813205915-30853                            | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:19 UTC | Fri, 13 Aug 2021 21:11:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:21 UTC | Fri, 13 Aug 2021 21:11:22 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:22 UTC | Fri, 13 Aug 2021 21:11:22 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:39 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:42 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:42 UTC | Fri, 13 Aug 2021 21:12:55 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:56 UTC | Fri, 13 Aug 2021 21:12:56 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:11:42
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:11:42.795927   14367 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:11:42.796007   14367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:42.796011   14367 out.go:311] Setting ErrFile to fd 2...
	I0813 21:11:42.796014   14367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:42.796112   14367 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:11:42.796329   14367 out.go:305] Setting JSON to false
	I0813 21:11:42.831608   14367 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":10465,"bootTime":1628878638,"procs":152,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:11:42.831684   14367 start.go:121] virtualization: kvm guest
	I0813 21:11:42.833942   14367 out.go:177] * [newest-cni-20210813210910-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:11:42.835433   14367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:42.834049   14367 notify.go:169] Checking for updates...
	I0813 21:11:42.836843   14367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:11:42.838221   14367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:11:42.839571   14367 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:11:42.839955   14367 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:42.840311   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:42.840349   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:42.850627   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0813 21:11:42.851039   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:42.851531   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:11:42.851553   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:42.851914   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:42.852078   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:42.852243   14367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:11:42.852539   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:42.852586   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:42.862427   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0813 21:11:42.862794   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:42.863217   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:11:42.863238   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:42.863560   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:42.863722   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:42.890871   14367 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:11:42.890896   14367 start.go:278] selected driver: kvm2
	I0813 21:11:42.890901   14367 start.go:751] validating driver "kvm2" against &{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[Metr
icsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:11:42.891038   14367 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:11:42.892035   14367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:11:42.892205   14367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:11:42.902128   14367 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:11:42.902465   14367 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:11:42.902493   14367 cni.go:93] Creating CNI manager for ""
	I0813 21:11:42.902501   14367 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:11:42.902511   14367 start_flags.go:277] config:
	{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false
default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:11:42.902637   14367 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:11:42.904436   14367 out.go:177] * Starting control plane node newest-cni-20210813210910-30853 in cluster newest-cni-20210813210910-30853
	I0813 21:11:42.904454   14367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:11:42.904476   14367 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 21:11:42.904493   14367 cache.go:56] Caching tarball of preloaded images
	I0813 21:11:42.904594   14367 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 21:11:42.904611   14367 cache.go:59] Finished verifying existence of preloaded tar for v1.22.0-rc.0 on crio
	I0813 21:11:42.904745   14367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:11:42.904886   14367 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:11:42.904907   14367 start.go:313] acquiring machines lock for newest-cni-20210813210910-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:11:42.904968   14367 start.go:317] acquired machines lock for "newest-cni-20210813210910-30853" in 47.215µs
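
The Spec echoed in the two lines above ({Name:... Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}) describes a named lock that is retried every Delay until Timeout expires. A self-contained Go sketch of that acquire loop, assuming a simple lock file under the temp directory (the helper and the file-based mechanism are illustrative, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"path/filepath"
		"time"
	)

	// acquireLock retries every delay until timeout, mirroring the
	// {Delay:500ms Timeout:13m0s} fields of the logged Spec.
	func acquireLock(name string, delay, timeout time.Duration) (func(), error) {
		path := filepath.Join(os.TempDir(), name+".lock")
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring lock " + name)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("machines-demo", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock acquired")
	}
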
	I0813 21:11:42.904982   14367 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:11:42.904989   14367 fix.go:55] fixHost starting: 
	I0813 21:11:42.905255   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:42.905284   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:42.914701   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0813 21:11:42.915142   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:42.915577   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:11:42.915601   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:42.915893   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:42.916055   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:42.916192   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:11:42.931954   14367 fix.go:108] recreateIfNeeded on newest-cni-20210813210910-30853: state=Stopped err=<nil>
	I0813 21:11:42.931997   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	W0813 21:11:42.932163   14367 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:11:42.934231   14367 out.go:177] * Restarting existing kvm2 VM for "newest-cni-20210813210910-30853" ...
	I0813 21:11:42.934255   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Start
	I0813 21:11:42.934377   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring networks are active...
	I0813 21:11:42.936300   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network default is active
	I0813 21:11:42.936569   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network mk-newest-cni-20210813210910-30853 is active
	I0813 21:11:42.936859   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Getting domain xml...
	I0813 21:11:42.938500   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:11:43.354989   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting to get IP...
	I0813 21:11:43.355874   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.356336   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Found IP for machine: 192.168.39.210
	I0813 21:11:43.356359   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserving static IP address...
	I0813 21:11:43.356376   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has current primary IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.356824   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:43.356867   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | skip adding static IP to network mk-newest-cni-20210813210910-30853 - found existing host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"}
	I0813 21:11:43.356881   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserved static IP address: 192.168.39.210
	I0813 21:11:43.356900   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting for SSH to be available...
	I0813 21:11:43.356952   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Getting to WaitForSSH function...
	I0813 21:11:43.361283   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.361723   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:43.361750   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.361884   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH client type: external
	I0813 21:11:43.361912   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa (-rw-------)
	I0813 21:11:43.361950   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:11:43.361964   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | About to run SSH command:
	I0813 21:11:43.361999   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | exit 0
	I0813 21:11:55.526780   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | SSH cmd err, output: <nil>: 
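
The WaitForSSH probe above shells out to the system ssh binary with the flag list from the DBG lines and runs "exit 0" until sshd inside the VM accepts the key (the ~12s gap between 21:11:43 and 21:11:55 is the VM booting). A hedged Go sketch of a single probe attempt; the argument order is copied verbatim from the log, and the key path is shortened here for readability:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		key := "/home/jenkins/.../id_rsa" // see the log for the full path
		cmd := exec.Command("/usr/bin/ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "ControlMaster=no",
			"-o", "ControlPath=none",
			"-o", "LogLevel=quiet",
			"-o", "PasswordAuthentication=no",
			"-o", "ServerAliveInterval=60",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"docker@192.168.39.210",
			"-o", "IdentitiesOnly=yes",
			"-i", key,
			"-p", "22",
			"exit 0") // the remote command: succeeds once a shell is reachable
		if err := cmd.Run(); err != nil {
			fmt.Println("SSH not available yet:", err)
			return
		}
		fmt.Println("SSH available")
	}
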
	I0813 21:11:55.527178   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:11:55.527809   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:11:55.532715   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.533039   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.533072   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.533357   14367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:11:55.533572   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:55.533782   14367 machine.go:88] provisioning docker machine ...
	I0813 21:11:55.533807   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:55.533995   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:11:55.534156   14367 buildroot.go:166] provisioning hostname "newest-cni-20210813210910-30853"
	I0813 21:11:55.534181   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:11:55.534310   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.538532   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.538833   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.538884   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.538981   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:55.539111   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.539255   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.539365   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:55.539527   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:55.539747   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:55.539769   14367 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname
	I0813 21:11:55.703412   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813210910-30853
	
	I0813 21:11:55.703444   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.708657   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.708940   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.708973   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.709072   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:55.709238   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.709378   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.709487   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:55.709631   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:55.709797   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:55.709817   14367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813210910-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813210910-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813210910-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:11:55.868176   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:11:55.868212   14367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/
docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:11:55.868238   14367 buildroot.go:174] setting up certificates
	I0813 21:11:55.868253   14367 provision.go:83] configureAuth start
	I0813 21:11:55.868267   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:11:55.868549   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:11:55.873683   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.874036   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.874076   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.874134   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.878497   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.878811   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.878838   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.878968   14367 provision.go:138] copyHostCerts
	I0813 21:11:55.879035   14367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:11:55.879046   14367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:11:55.879102   14367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:11:55.879218   14367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:11:55.879233   14367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:11:55.879257   14367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:11:55.879310   14367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:11:55.879317   14367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:11:55.879335   14367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:11:55.879375   14367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813210910-30853 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube newest-cni-20210813210910-30853]
	I0813 21:11:55.964045   14367 provision.go:172] copyRemoteCerts
	I0813 21:11:55.964097   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:11:55.964133   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.968772   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.969026   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.969055   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.969181   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:55.969305   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.969455   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:55.969568   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:56.057985   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:11:56.073617   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:11:56.089176   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:11:56.105376   14367 provision.go:86] duration metric: configureAuth took 237.110908ms
	I0813 21:11:56.105403   14367 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:11:56.105565   14367 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:56.105657   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.110786   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.111110   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.111138   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.111272   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.111428   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.111608   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.111776   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.111944   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:56.112123   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:56.112140   14367 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:11:56.692599   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
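The %!s(MISSING) and %!N(MISSING) fragments in the command quoted above (and again in the date, stat, crictl-config, and kubelet-config dumps further down) are not part of what actually ran on the VM: they are Go's fmt notation for a format verb with no matching operand, produced when literal shell text such as "printf %s" or "date +%s.%N" was routed through a Printf-style logger. A minimal reproduction:

	package main

	import "fmt"

	func main() {
		// A %-verb with no corresponding operand renders as %!verb(MISSING),
		// which is exactly the marker seen in the logged commands.
		fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
	}
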
	I0813 21:11:56.692630   14367 machine.go:91] provisioned docker machine in 1.15883504s
	I0813 21:11:56.692649   14367 start.go:267] post-start starting for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:11:56.692658   14367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:11:56.692680   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.692996   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:11:56.693027   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.698055   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.698335   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.698361   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.698462   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.698675   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.698881   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.699034   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:56.786765   14367 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:11:56.791302   14367 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:11:56.791324   14367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:11:56.791377   14367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:11:56.791562   14367 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:11:56.791658   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:11:56.798543   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:11:56.814799   14367 start.go:270] post-start completed in 122.13662ms
	I0813 21:11:56.814834   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.815087   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.820332   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.820647   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.820675   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.820808   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.820980   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.821166   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.821302   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.821487   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:56.821671   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:56.821684   14367 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:11:56.943374   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628889116.888575334
	
	I0813 21:11:56.943398   14367 fix.go:212] guest clock: 1628889116.888575334
	I0813 21:11:56.943406   14367 fix.go:225] Guest: 2021-08-13 21:11:56.888575334 +0000 UTC Remote: 2021-08-13 21:11:56.815068517 +0000 UTC m=+14.062769203 (delta=73.506817ms)
	I0813 21:11:56.943465   14367 fix.go:196] guest clock delta is within tolerance: 73.506817ms
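
The clock check above compares the guest's date +%s.%N output against the host's wall clock and accepts the result if the skew is small. A quick confirmation of the delta arithmetic, assuming a placeholder tolerance (the log does not state the actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps taken from the fix.go lines above.
		guest := time.Date(2021, 8, 13, 21, 11, 56, 888575334, time.UTC)
		host := time.Date(2021, 8, 13, 21, 11, 56, 815068517, time.UTC)
		delta := guest.Sub(host)         // a fuller check would use the absolute value
		const tolerance = 2 * time.Second // assumed threshold, not from the log
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
		// Output: delta=73.506817ms within tolerance: true
	}
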
	I0813 21:11:56.943472   14367 fix.go:57] fixHost completed within 14.038482603s
	I0813 21:11:56.943479   14367 start.go:80] releasing machines lock for "newest-cni-20210813210910-30853", held for 14.038502672s
	I0813 21:11:56.943518   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.943777   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:11:56.948878   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.949105   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.949141   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.949294   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.949480   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.949924   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.950226   14367 ssh_runner.go:149] Run: systemctl --version
	I0813 21:11:56.950256   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.950288   14367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:11:56.950336   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.954753   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.955114   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.955144   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.955201   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.955377   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.955553   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.955683   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:56.955957   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.956306   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.956339   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.956470   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.956615   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.956754   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.956887   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:57.049083   14367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:11:57.049188   14367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:01.075721   14367 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.026505189s)
	I0813 21:12:01.075899   14367 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:12:01.075949   14367 ssh_runner.go:149] Run: which lz4
	I0813 21:12:01.080367   14367 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:12:01.084549   14367 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:12:01.084574   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 21:12:03.736918   14367 crio.go:362] Took 2.656574 seconds to copy over tarball
	I0813 21:12:03.736981   14367 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:12:08.978441   14367 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.241433608s)
	I0813 21:12:08.978475   14367 crio.go:369] Took 5.241529 seconds to extract the tarball
	I0813 21:12:08.978487   14367 ssh_runner.go:100] rm: /preloaded.tar.lz4
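
For scale, the preload numbers above work out to roughly 222 MB/s for the scp copy and 113 MB/s for the lz4 extraction; a quick check of the arithmetic:

	package main

	import "fmt"

	func main() {
		// Figures from the ssh_runner/crio lines above: 590,981,257 bytes
		// copied in 2.656574s and extracted in 5.241433s.
		const tarBytes = 590981257.0
		fmt.Printf("copy:    %.0f MB/s\n", tarBytes/2.656574/1e6) // ≈ 222 MB/s
		fmt.Printf("extract: %.0f MB/s\n", tarBytes/5.241433/1e6) // ≈ 113 MB/s
	}
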
	I0813 21:12:09.018090   14367 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:12:09.030205   14367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:12:09.040861   14367 docker.go:153] disabling docker service ...
	I0813 21:12:09.040916   14367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:12:09.052115   14367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:12:09.061719   14367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:12:09.206812   14367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:12:09.326373   14367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:12:09.337093   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:12:09.349902   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:12:09.357809   14367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:12:09.364087   14367 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:12:09.364137   14367 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:12:09.377607   14367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:12:09.384218   14367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:12:09.506260   14367 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:12:09.784512   14367 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:12:09.784591   14367 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:12:09.791238   14367 start.go:413] Will wait 60s for crictl version
	I0813 21:12:09.791288   14367 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:12:09.821889   14367 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:12:09.821953   14367 ssh_runner.go:149] Run: crio --version
	I0813 21:12:09.891924   14367 ssh_runner.go:149] Run: crio --version
	I0813 21:12:11.735003   14367 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:12:11.735058   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:12:11.740625   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:11.741006   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:11.741030   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:11.741248   14367 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:12:11.746768   14367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
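
The one-liner above makes the host.minikube.internal mapping idempotent: filter any existing entry out of /etc/hosts, append the desired line, and copy the result back into place. A hedged Go equivalent of the same filter-and-append pattern (the helper name and error handling are illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line ending in "\t<host>" and appends
	// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
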
	I0813 21:12:13.851300   14367 out.go:177]   - kubelet.network-plugin=cni
	I0813 21:12:13.853346   14367 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:12:13.853431   14367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:12:13.853508   14367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:13.897566   14367 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:12:13.897588   14367 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:12:13.897634   14367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:13.927806   14367 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:12:13.927832   14367 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:12:13.927899   14367 ssh_runner.go:149] Run: crio config
	I0813 21:12:14.192898   14367 cni.go:93] Creating CNI manager for ""
	I0813 21:12:14.192926   14367 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:12:14.192939   14367 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:12:14.192958   14367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813210910-30853 NodeName:newest-cni-20210813210910-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-el
ect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.210 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:12:14.193109   14367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813210910-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
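
One detail worth noting in the generated config: podSubnet (and the kubeadm pod-network-cidr option earlier) is written as 192.168.111.111/16, a /16 with host bits set. CIDR parsers mask it down to the enclosing 192.168.0.0/16 network, as a quick check shows:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// net.ParseCIDR returns both the literal address and the masked network.
		ip, ipnet, err := net.ParseCIDR("192.168.111.111/16")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip, ipnet) // 192.168.111.111 192.168.0.0/16
	}
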
	
	I0813 21:12:14.193215   14367 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813210910-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.210 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:12:14.193279   14367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:12:14.201392   14367 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:12:14.201465   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:12:14.208876   14367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (554 bytes)
	I0813 21:12:14.219827   14367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:12:14.230627   14367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0813 21:12:14.242277   14367 ssh_runner.go:149] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0813 21:12:14.245984   14367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:12:14.256040   14367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853 for IP: 192.168.39.210
	I0813 21:12:14.256095   14367 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:12:14.256115   14367 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:12:14.256166   14367 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:12:14.256189   14367 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a
	I0813 21:12:14.256210   14367 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key
	I0813 21:12:14.256310   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:12:14.256353   14367 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:12:14.256370   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:12:14.256397   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:12:14.256422   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:12:14.256450   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:12:14.256497   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:12:14.257477   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:12:14.273859   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:12:14.290500   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:12:14.306545   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:12:14.322565   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:12:14.338996   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:12:14.354718   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:12:14.370471   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:12:14.386542   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:12:14.402369   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:12:14.418038   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:12:14.434102   14367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
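
The "scp memory --> <path>" lines above stream bytes that exist only inside the minikube process out to files on the VM over SSH, rather than copying a local file. A minimal sketch of that idea, assuming plain OpenSSH with an illustrative key path and the node address from this log (the real transfer goes through minikube's ssh_runner, not the ssh CLI):

	package main

	import (
		"bytes"
		"os/exec"
	)

	// copyMemory pipes in-memory data to a remote file via "sudo tee",
	// mimicking the "scp memory --> path" transfers above. The key path is
	// a placeholder; the user/IP match the node in this log.
	func copyMemory(data []byte, remotePath string) error {
		cmd := exec.Command("ssh", "-i", "/path/to/id_rsa", "docker@192.168.39.210",
			"sudo tee "+remotePath+" >/dev/null")
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		// Writes to a scratch path rather than the real kubeconfig.
		if err := copyMemory([]byte("apiVersion: v1\n"), "/tmp/kubeconfig-sketch"); err != nil {
			panic(err)
		}
	}
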
	I0813 21:12:14.445953   14367 ssh_runner.go:149] Run: openssl version
	I0813 21:12:14.451937   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:12:14.459153   14367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:12:14.463692   14367 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:12:14.463732   14367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:12:14.469219   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:12:14.476824   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:12:14.484315   14367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:12:14.488839   14367 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:12:14.488880   14367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:12:14.494335   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:12:14.501820   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:12:14.509124   14367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:14.513481   14367 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:14.513509   14367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:14.518990   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
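
The openssl/ln sequence above implements OpenSSL's standard CA lookup convention: each trusted certificate in /etc/ssl/certs is reachable through a symlink named "<subject-hash>.0" (here 51391683.0, 3ec20f2e.0 and b5213941.0). A minimal sketch of one such step, not minikube's actual certs.go code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of certPath and links
	// /etc/ssl/certs/<hash>.0 to it, mirroring the "test -L || ln -fs" step.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink already present, nothing to do
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
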
	I0813 21:12:14.526670   14367 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:14.526755   14367 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:12:14.526785   14367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:12:14.558524   14367 cri.go:76] found id: ""
	I0813 21:12:14.558576   14367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:12:14.566751   14367 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:12:14.566777   14367 kubeadm.go:600] restartCluster start
	I0813 21:12:14.566871   14367 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:12:14.573499   14367 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:14.574151   14367 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210813210910-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:14.574242   14367 kubeconfig.go:128] "newest-cni-20210813210910-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:12:14.574539   14367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:14.576626   14367 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:12:14.582645   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:14.582686   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:14.591111   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:14.791510   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:14.791586   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:14.801142   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:14.991370   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:14.991447   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.000584   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.191896   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.191970   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.201129   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.391496   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.391569   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.401482   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.591854   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.591936   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.600994   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.791233   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.791296   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.800423   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.991725   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.991807   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.000869   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.192288   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.192396   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.201776   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.392113   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.392184   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.401017   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.591247   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.591333   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.600685   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.791939   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.792040   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.801253   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.991530   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.991617   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.000621   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.191933   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.192020   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.201085   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.391391   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.391478   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.400828   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.592213   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.592318   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.601566   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.601584   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.601629   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.609718   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.609734   14367 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
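
"timed out waiting for the condition" is the characteristic error string of Kubernetes' wait helpers, which fits the ~200ms polling cadence visible above. A sketch of that pattern, assuming k8s.io/apimachinery's wait.Poll (the real loop lives in minikube's api_server.go and may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		err := wait.Poll(200*time.Millisecond, 3*time.Second, func() (bool, error) {
			// Mirrors "sudo pgrep -xnf kube-apiserver.*minikube.*": a non-zero
			// exit means no matching process yet, so keep polling.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
				return false, nil
			}
			return true, nil
		})
		if err != nil {
			fmt.Println("apiserver error:", err) // "timed out waiting for the condition"
		}
	}
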
	I0813 21:12:17.609742   14367 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:12:17.609756   14367 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:12:17.609808   14367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:12:17.643300   14367 cri.go:76] found id: ""
	I0813 21:12:17.643371   14367 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:12:17.657320   14367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:12:17.665940   14367 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:12:17.665987   14367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:12:17.672626   14367 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:12:17.672644   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:17.812928   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.022414   14367 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.209447891s)
	I0813 21:12:19.022450   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.276932   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.417749   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
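
Instead of a full "kubeadm init", the restart path replays individual init phases against the generated config, in exactly the order shown above. A sketch of the same sequence, assuming kubeadm is on PATH rather than under /var/lib/minikube/binaries:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// The five phases the log runs, in order: certs, kubeconfigs,
		// kubelet bootstrap, static control-plane manifests, local etcd.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	}
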
	I0813 21:12:19.520816   14367 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:12:19.520881   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:20.034374   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:20.534125   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:21.033925   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:21.534509   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:22.033961   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:22.534630   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:23.034407   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:23.534686   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:24.034496   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:24.533962   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:25.033759   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:25.533782   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:26.033867   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:26.534648   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:27.034478   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:27.534627   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:28.034344   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:28.046824   14367 api_server.go:70] duration metric: took 8.526007952s to wait for apiserver process to appear ...
	I0813 21:12:28.046849   14367 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:12:28.046873   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:33.047532   14367 api_server.go:255] stopped: https://192.168.39.210:8443/healthz: Get "https://192.168.39.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:12:33.548327   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:38.549533   14367 api_server.go:255] stopped: https://192.168.39.210:8443/healthz: Get "https://192.168.39.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:12:39.048081   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:44.049250   14367 api_server.go:255] stopped: https://192.168.39.210:8443/healthz: Get "https://192.168.39.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:12:44.547817   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:49.073312   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:12:49.073339   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:12:49.547901   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:49.557738   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:12:49.557767   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:12:50.048422   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:50.078759   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:12:50.078800   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:12:50.548386   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:50.555157   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:12:50.555185   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:12:51.047708   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:51.064435   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:12:51.088794   14367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:12:51.088819   14367 api_server.go:129] duration metric: took 23.041952464s to wait for apiserver health ...
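
The health wait above tolerates three distinct outcomes before the final 200: client timeouts while the serving socket comes up, a 403 for the anonymous probe until RBAC is bootstrapped, and 500s while post-start hooks finish. A sketch of that probe loop, with an assumed 5s per-request timeout (matching the "Client.Timeout exceeded" gaps) and InsecureSkipVerify standing in for the cluster CA the real check would trust:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 0; attempt < 60; attempt++ {
			resp, err := client.Get("https://192.168.39.210:8443/healthz")
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 (anonymous before RBAC bootstrap) and 500 (post-start
				// hooks still failing) both mean "not ready yet".
				if ok {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // ~500ms between checks, as above
		}
		fmt.Println("gave up waiting for healthz")
	}
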
	I0813 21:12:51.088830   14367 cni.go:93] Creating CNI manager for ""
	I0813 21:12:51.088848   14367 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:12:51.090600   14367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:12:51.090659   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:12:51.114542   14367 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
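
The 457-byte write above drops a bridge CNI config into /etc/cni/net.d. For illustration only, a generic bridge-plus-portmap conflist in that spirit; the subnet and exact fields here are assumptions, not minikube's shipped template:

	package main

	import "os"

	// Illustrative conflist; a real deployment would write it to
	// /etc/cni/net.d/1-k8s.conflist (root required).
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "k8s",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
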
	I0813 21:12:51.163278   14367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:12:51.180593   14367 system_pods.go:59] 9 kube-system pods found
	I0813 21:12:51.180640   14367 system_pods.go:61] "coredns-78fcd69978-42frp" [ffc12ff0-fe4e-422b-ae81-83f17416e379] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:12:51.180647   14367 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:12:51.180654   14367 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 21:12:51.180659   14367 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:12:51.180665   14367 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running
	I0813 21:12:51.180672   14367 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:12:51.180679   14367 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:12:51.180683   14367 system_pods.go:61] "metrics-server-7c784ccb57-mrklk" [ad347f93-2bcc-4e1c-b82c-66f4854c46d2] Pending
	I0813 21:12:51.180688   14367 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:12:51.180695   14367 system_pods.go:74] duration metric: took 17.398617ms to wait for pod list to return data ...
	I0813 21:12:51.180702   14367 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:12:51.195318   14367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:12:51.195350   14367 node_conditions.go:123] node cpu capacity is 2
	I0813 21:12:51.195364   14367 node_conditions.go:105] duration metric: took 14.656302ms to run NodePressure ...
	I0813 21:12:51.195384   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:52.312553   14367 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.117144753s)
	I0813 21:12:52.312593   14367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:12:52.356977   14367 ops.go:34] apiserver oom_adj: -16
	I0813 21:12:52.357001   14367 kubeadm.go:604] restartCluster took 37.790217793s
	I0813 21:12:52.357011   14367 kubeadm.go:392] StartCluster complete in 37.83034654s
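
The oom_adj check a few lines above confirms the apiserver runs with a strongly negative OOM score adjustment (-16), making the kernel's OOM killer unlikely to target it. A sketch of that read, assuming a single apiserver process and the legacy /proc/<pid>/oom_adj interface:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the apiserver PID, then read its legacy OOM adjustment.
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
	}
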
	I0813 21:12:52.357032   14367 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:52.357142   14367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:52.357747   14367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:52.364948   14367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813210910-30853" rescaled to 1
	I0813 21:12:52.365013   14367 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:12:52.365042   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:12:52.365064   14367 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:12:52.366957   14367 out.go:177] * Verifying Kubernetes components...
	I0813 21:12:52.365142   14367 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367043   14367 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813210910-30853"
	I0813 21:12:52.367055   14367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:52.365153   14367 addons.go:59] Setting dashboard=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367068   14367 addons.go:135] Setting addon dashboard=true in "newest-cni-20210813210910-30853"
	W0813 21:12:52.367075   14367 addons.go:147] addon dashboard should already be in state true
	I0813 21:12:52.367104   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	W0813 21:12:52.367057   14367 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:12:52.367143   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:52.365163   14367 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367189   14367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813210910-30853"
	I0813 21:12:52.365171   14367 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367242   14367 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210813210910-30853"
	W0813 21:12:52.367259   14367 addons.go:147] addon metrics-server should already be in state true
	I0813 21:12:52.367313   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:52.365246   14367 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:52.367562   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367590   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367602   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.367602   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367631   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.367706   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.367786   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367825   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.379237   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0813 21:12:52.379715   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.381362   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.381386   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.381428   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0813 21:12:52.381814   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.381961   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.382458   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.382494   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.382686   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0813 21:12:52.382786   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.382804   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.382823   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0813 21:12:52.383120   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.383163   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.383200   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.383539   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.383556   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.383668   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.383690   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.383729   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.383768   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.383943   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.384021   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.384516   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.384555   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.384664   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.394410   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0813 21:12:52.395550   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.396103   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.396129   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.396535   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.396744   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.400277   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.402440   14367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:12:52.402559   14367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:12:52.402579   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:12:52.402600   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.405151   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0813 21:12:52.405530   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.405998   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.406023   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.406433   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.406615   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.407218   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0813 21:12:52.407647   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.408083   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.408109   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.408439   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.408620   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.409099   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.409688   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.409714   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.409839   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.410000   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.410152   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.410160   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.410298   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.412246   14367 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:12:52.411625   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.413764   14367 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:12:52.413833   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:12:52.415179   14367 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:12:52.415234   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:12:52.415246   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:12:52.413846   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:12:52.415272   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.415281   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.420440   14367 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813210910-30853"
	W0813 21:12:52.420462   14367 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:12:52.420493   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:52.420827   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.420870   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.421205   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.421629   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.421659   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.421870   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.422060   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.422186   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.422286   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.422314   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.422724   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.422755   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.422901   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.423036   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.423159   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.423286   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.431929   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0813 21:12:52.432276   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.432667   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.432688   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.432966   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.433564   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.433609   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.470073   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0813 21:12:52.470530   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.471088   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.471110   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.471445   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.471627   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.474940   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.475188   14367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:12:52.475204   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:12:52.475220   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.480469   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.480823   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.480853   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.480971   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.481124   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.481293   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.481434   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.601256   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:12:52.683739   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:12:52.683768   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:12:52.703159   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:12:52.703192   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:12:52.748843   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:12:52.831110   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:12:52.831139   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:12:52.835435   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:12:52.835459   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:12:53.104100   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:12:53.104124   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:12:53.157894   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:12:53.157919   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:12:53.204915   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:12:53.204946   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:12:53.222988   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:12:53.260055   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:12:53.260085   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:12:53.493428   14367 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.128352719s)
	I0813 21:12:53.493517   14367 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.126438109s)
	I0813 21:12:53.493561   14367 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:12:53.493614   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:53.493524   14367 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 21:12:53.603564   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:12:53.603589   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:12:53.956970   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:12:53.956999   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:12:54.139515   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:12:54.139539   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:12:54.251231   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:12:54.251259   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:12:54.508265   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:12:54.552926   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.951611096s)
	I0813 21:12:54.552988   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.553025   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.553309   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.553326   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.553342   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.553360   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.553373   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.553589   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.553603   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.553629   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.926950   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.178060823s)
	I0813 21:12:54.927005   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.927018   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.927304   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.927374   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.927395   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.927408   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.927420   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.927628   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.927647   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.927650   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.927670   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.927686   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.927912   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.927923   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.173496   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.950447285s)
	I0813 21:12:55.173554   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.173571   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.173579   14367 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.679944797s)
	I0813 21:12:55.173598   14367 api_server.go:70] duration metric: took 2.808558842s to wait for apiserver process to appear ...
	I0813 21:12:55.173604   14367 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:12:55.173613   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:55.173905   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:55.173919   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.173936   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.173957   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.173972   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.174167   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.174201   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.174217   14367 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813210910-30853"
	I0813 21:12:55.181805   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:12:55.183106   14367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:12:55.183121   14367 api_server.go:129] duration metric: took 9.513019ms to wait for apiserver health ...
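The healthz wait logged above is a plain poll loop: GET https://<apiserver>:8443/healthz until it answers 200. A self-contained sketch of that pattern follows (the endpoint comes from the log; the poll interval, timeout, and skipped TLS verification are assumptions for brevity, not minikube's actual client setup):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a cluster-local certificate; a real client would
	// trust that CA rather than skip verification (simplification here).
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: apiserver is serving
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.210:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}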
	I0813 21:12:55.183129   14367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:12:55.194552   14367 system_pods.go:59] 8 kube-system pods found
	I0813 21:12:55.194577   14367 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:12:55.194582   14367 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running
	I0813 21:12:55.194587   14367 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:12:55.194595   14367 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:12:55.194604   14367 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:12:55.194612   14367 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:12:55.194623   14367 system_pods.go:61] "metrics-server-7c784ccb57-mrklk" [ad347f93-2bcc-4e1c-b82c-66f4854c46d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:55.194631   14367 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:12:55.194645   14367 system_pods.go:74] duration metric: took 11.511833ms to wait for pod list to return data ...
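Each pod summary above combines the pod phase with its Ready condition: a pod in phase Running can still be unready while its containers restart, which is why several lines read "Running / Ready:ContainersNotReady". A dependency-free sketch of that distinction (the types are simplified stand-ins for the corev1 equivalents, not the real client-go API):

package main

import "fmt"

// condition and pod are simplified stand-ins for the corev1 types.
type condition struct {
	Type   string
	Status string
}

type pod struct {
	Name       string
	Phase      string
	Conditions []condition
}

// ready reports whether the pod's Ready condition is True.
func ready(p pod) bool {
	for _, c := range p.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// Mirrors the coredns line above: phase Running but not yet Ready.
	p := pod{
		Name:       "coredns-78fcd69978-bc587",
		Phase:      "Running",
		Conditions: []condition{{Type: "Ready", Status: "False"}},
	}
	fmt.Printf("%s: %s / Ready:%v\n", p.Name, p.Phase, ready(p))
}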
	I0813 21:12:55.194653   14367 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:12:55.197911   14367 default_sa.go:45] found service account: "default"
	I0813 21:12:55.197931   14367 default_sa.go:55] duration metric: took 3.2722ms for default service account to be created ...
	I0813 21:12:55.197940   14367 kubeadm.go:547] duration metric: took 2.832901179s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:12:55.197966   14367 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:12:55.201445   14367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:12:55.201468   14367 node_conditions.go:123] node cpu capacity is 2
	I0813 21:12:55.201482   14367 node_conditions.go:105] duration metric: took 3.51037ms to run NodePressure ...
	I0813 21:12:55.201491   14367 start.go:231] waiting for startup goroutines ...
	I0813 21:12:55.694029   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.185700485s)
	I0813 21:12:55.694135   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.694155   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.694470   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:55.694528   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.694553   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.694564   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.694577   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.694846   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.694875   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.694906   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:55.696782   14367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:12:55.696806   14367 addons.go:344] enableAddons completed in 3.331747172s
	I0813 21:12:55.741873   14367 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:12:55.743280   14367 out.go:177] 
	W0813 21:12:55.743425   14367 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:12:55.744924   14367 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:12:55.746315   14367 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813210910-30853" cluster and "default" namespace by default
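The closing warning comes from a minor-version skew check: kubectl 1.20 against cluster 1.22 differs by two minor versions, beyond kubectl's supported +/-1 skew. A rough sketch of that comparison (parsing deliberately simplified; this is not minikube's actual implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two Kubernetes version strings.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.20.5", "1.22.0-rc.0") // versions from the log
	fmt.Println("minor skew:", skew)              // prints 2, matching the log
	if skew > 1 {
		fmt.Println("kubectl may have incompatibilities with this cluster")
	}
}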
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:11:53 UTC, end at Fri 2021-08-13 21:12:59 UTC. --
	Aug 13 21:12:58 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:58.652366080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=c57fc8cf-fd25-4c82-ae70-d5052ce47a25 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.255387727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b1018fd-26ef-4afd-98d0-5ffdccf4bbae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.255530952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b1018fd-26ef-4afd-98d0-5ffdccf4bbae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.255784769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669,PodSandboxId:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8,State:CONTAINER_RUNNING,CreatedAt:1628889174119158329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qt9ld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e36061f-0559-4cde-9b0a-b5cb328d0d76,},Annotations:map[string]string{io.kubernetes.container.hash: 9c81cf57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe,PodSandboxId:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889173604162723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6be87df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_RUNNING,CreatedAt:1628889170148091333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203,PodSandboxId:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889158440159537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab7e5e84ea4e6309241a6623f47ddd8,},Annotations:map[string]string{io.kubernetes.container.hash: f0960535,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71,PodSandboxId:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628889147629888101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b2831a6feaa48869fe13cec6b8ce22,},Annotations:map[string]string{io.kubernetes.container.hash: a0decd21,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628889147271783918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291,PodSandboxId:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_RUNNING,CreatedAt:1628889147070852027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32688baa2c6a65d13ce71d2e854f4832,},Annotations:map[string]string{io.kubernetes.container.hash: ffb6a91b,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b1018fd-26ef-4afd-98d0-5ffdccf4bbae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.467954811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a975d4b-677a-49d4-a4f3-c3803e9f7fda name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.468087904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a975d4b-677a-49d4-a4f3-c3803e9f7fda name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.468254291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669,PodSandboxId:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8,State:CONTAINER_RUNNING,CreatedAt:1628889174119158329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qt9ld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e36061f-0559-4cde-9b0a-b5cb328d0d76,},Annotations:map[string]string{io.kubernetes.container.hash: 9c81cf57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe,PodSandboxId:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889173604162723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6be87df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_RUNNING,CreatedAt:1628889170148091333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203,PodSandboxId:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889158440159537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab7e5e84ea4e6309241a6623f47ddd8,},Annotations:map[string]string{io.kubernetes.container.hash: f0960535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71,PodSandboxId:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628889147629888101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b2831a6feaa48869fe13cec6b8ce22,},Annotations:map[string]string{io.kubernetes.container.hash: a0decd21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628889147271783918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291,PodSandboxId:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_RUNNING,CreatedAt:1628889147070852027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32688baa2c6a65d13ce71d2e854f4832,},Annotations:map[string]string{io.kubernetes.container.hash: ffb6a91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a975d4b-677a-49d4-a4f3-c3803e9f7fda name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.505440185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46e75ad0-84ed-4007-acfb-d108564c112b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.505570670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46e75ad0-84ed-4007-acfb-d108564c112b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:12:59 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:12:59.505809072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669,PodSandboxId:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8,State:CONTAINER_RUNNING,CreatedAt:1628889174119158329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qt9ld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e36061f-0559-4cde-9b0a-b5cb328d0d76,},Annotations:map[string]string{io.kubernetes.container.hash: 9c81cf57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe,PodSandboxId:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889173604162723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6be87df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_RUNNING,CreatedAt:1628889170148091333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203,PodSandboxId:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889158440159537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab7e5e84ea4e6309241a6623f47ddd8,},Annotations:map[string]string{io.kubernetes.container.hash: f0960535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71,PodSandboxId:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628889147629888101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b2831a6feaa48869fe13cec6b8ce22,},Annotations:map[string]string{io.kubernetes.container.hash: a0decd21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628889147271783918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291,PodSandboxId:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_RUNNING,CreatedAt:1628889147070852027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32688baa2c6a65d13ce71d2e854f4832,},Annotations:map[string]string{io.kubernetes.container.hash: ffb6a91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46e75ad0-84ed-4007-acfb-d108564c112b name=/runtime.v1alpha2.RuntimeService/ListContainers
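
The repeated RuntimeService/ListContainers entries above are CRI-O answering status polls over its CRI gRPC socket; the empty filter in each request is why every call logs "No filters were applied" before dumping the full container list. A minimal Go sketch of issuing the same call, assuming the k8s.io/cri-api v1alpha2 client (the API version named in the log) and CRI-O's default socket path /var/run/crio/crio.sock:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// The "unix://" target makes grpc-go dial the Unix-domain socket directly.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter reproduces the "No filters were applied" responses above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s  attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

On the node itself, sudo crictl ps -a renders the same data as the container status table below.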
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID
	e2863ab689591       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                          5 seconds ago       Running             kube-proxy                1                   7d4ecadfd7f19
	21ee344d5f9ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                          5 seconds ago       Running             storage-provisioner       0                   cd3ff57b787d3
	f39aba8b3d625       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                          9 seconds ago       Running             kube-controller-manager   2                   769393d983373
	81f490d516432       k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d   21 seconds ago      Running             etcd                      1                   248f7b6f7fd02
	5bbe5f8c98c37       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                          31 seconds ago      Running             kube-scheduler            1                   f547e1140c140
	f0de6c0b2f66a       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                          32 seconds ago      Exited              kube-controller-manager   1                   769393d983373
	09c7d19e2c150       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                          32 seconds ago      Running             kube-apiserver            1                   433cba576a12a
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug13 21:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.091915] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.801628] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000020] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.309380] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.037434] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.012704] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1720 comm=systemd-network
	[  +0.553475] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.340981] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005458] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:12] systemd-fstab-generator[2140]: Ignoring "noauto" for root device
	[  +0.129930] systemd-fstab-generator[2153]: Ignoring "noauto" for root device
	[  +0.170141] systemd-fstab-generator[2180]: Ignoring "noauto" for root device
	[  +9.751619] systemd-fstab-generator[2372]: Ignoring "noauto" for root device
	[ +31.719822] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.007272] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.721371] systemd-fstab-generator[3536]: Ignoring "noauto" for root device
	[  +0.824042] systemd-fstab-generator[3590]: Ignoring "noauto" for root device
	[  +0.942121] systemd-fstab-generator[3645]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203] <==
	* {"level":"info","ts":"2021-08-13T21:12:38.772Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-13T21:12:38.775Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"5a5dd032def1271d","local-server-version":"3.5.0","cluster-id":"989b3f6bb1f1f8ce","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"5a5dd032def1271d","initial-advertise-peer-urls":["https://192.168.39.210:2380"],"listen-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"5a5dd032def1271d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d switched to configuration voters=(6511589553154893597)"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"989b3f6bb1f1f8ce","local-member-id":"5a5dd032def1271d","added-peer-id":"5a5dd032def1271d","added-peer-peer-urls":["https://192.168.39.210:2380"]}
	{"level":"info","ts":"2021-08-13T21:12:38.781Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"989b3f6bb1f1f8ce","local-member-id":"5a5dd032def1271d","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d received MsgPreVoteResp from 5a5dd032def1271d at term 2"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became candidate at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d received MsgVoteResp from 5a5dd032def1271d at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became leader at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5a5dd032def1271d elected leader 5a5dd032def1271d at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.458Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"5a5dd032def1271d","local-member-attributes":"{Name:newest-cni-20210813210910-30853 ClientURLs:[https://192.168.39.210:2379]}","request-path":"/0/members/5a5dd032def1271d/attributes","cluster-id":"989b3f6bb1f1f8ce","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T21:12:39.458Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:12:39.459Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:12:39.461Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.210:2379"}
	{"level":"info","ts":"2021-08-13T21:12:39.461Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T21:12:39.461Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T21:12:39.463Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:13:39 up 1 min,  0 users,  load average: 1.04, 0.39, 0.14
	Linux newest-cni-20210813210910-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291] <==
	* I0813 21:12:49.115844       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0813 21:12:49.115949       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0813 21:12:49.115994       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 21:12:49.128554       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0813 21:12:49.128838       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0813 21:12:49.229982       1 cache.go:39] Caches are synced for autoregister controller
	I0813 21:12:49.230133       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0813 21:12:49.230969       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 21:12:49.231091       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:12:49.232030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:12:49.272786       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 21:12:49.305976       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:12:50.004094       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:12:50.137805       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:12:50.139995       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	W0813 21:12:51.138280       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:12:51.138441       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:12:51.138596       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:12:51.812177       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:12:51.876910       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:12:52.193096       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:12:52.261543       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:12:52.281306       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 21:12:55.321243       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xbe
	crypto/tls.(*Conn).readFromUntil(0xc00036aa80, 0x5176a20, 0xc00093c070, 0x5, 0xc00093c070, 0x99)
		/usr/local/go/src/crypto/tls/conn.go:798 +0xf3
	crypto/tls.(*Conn).readRecordOrCCS(0xc00036aa80, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:605 +0x115
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:573
	crypto/tls.(*Conn).Read(0xc00036aa80, 0xc000a53000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:1276 +0x165
	bufio.(*Reader).Read(0xc0006213e0, 0xc0001ad6f8, 0x9, 0x9, 0x99f9cb, 0xc000914c78, 0x4071a5)
		/usr/local/go/src/bufio/bufio.go:227 +0x222
	io.ReadAtLeast(0x516f360, 0xc0006213e0, 0xc0001ad6f8, 0x9, 0x9, 0x9, 0xc000a2f5e0, 0x72199d9e98c000, 0xc000a2f5e0)
		/usr/local/go/src/io/io.go:328 +0x87
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc0001ad6f8, 0x9, 0x9, 0x516f360, 0xc0006213e0, 0x0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001ad6c0, 0xc000a31710, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000914fa8, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1821 +0xd8
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000120d80)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1743 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5
	
	* 
	* ==> kube-controller-manager [f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24] <==
	* I0813 21:12:53.851336       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0813 21:12:53.851403       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
	I0813 21:12:53.851584       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0813 21:12:53.851612       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0813 21:12:53.851798       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0813 21:12:53.851847       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0813 21:12:53.851964       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0813 21:12:53.852007       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0813 21:12:53.852032       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0813 21:12:53.852062       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0813 21:12:53.852170       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0813 21:12:53.852241       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0813 21:12:53.852272       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0813 21:12:53.852381       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0813 21:12:53.852412       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0813 21:12:53.852449       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0813 21:12:53.852562       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0813 21:12:53.852601       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0813 21:12:53.856127       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0813 21:12:53.856548       1 controllermanager.go:577] Started "resourcequota"
	I0813 21:12:53.857282       1 resource_quota_controller.go:273] Starting resource quota controller
	I0813 21:12:53.857298       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0813 21:12:53.857341       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0813 21:12:53.912143       1 node_ipam_controller.go:91] Sending events to api server.
	E0813 21:12:53.916542       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669] <==
	* I0813 21:12:54.596423       1 node.go:172] Successfully retrieved node IP: 192.168.39.210
	I0813 21:12:54.596770       1 server_others.go:140] Detected node IP 192.168.39.210
	W0813 21:12:54.596872       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0813 21:12:54.701969       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:12:54.702176       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:12:54.702190       1 server_others.go:212] Using iptables Proxier.
	I0813 21:12:54.703284       1 server.go:649] Version: v1.22.0-rc.0
	I0813 21:12:54.705773       1 config.go:315] Starting service config controller
	I0813 21:12:54.705886       1 config.go:224] Starting endpoint slice config controller
	I0813 21:12:54.706015       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 21:12:54.706015       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0813 21:12:54.724469       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813210910-30853.169afa12fcd1ead9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd5f5aa120176, ext:375057157, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813210910-30853", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813210910-30853", UID:"newest-cni-20210813210910-30853", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813210910-30853.169afa12fcd1ead9" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 21:12:54.806851       1 shared_informer.go:247] Caches are synced for service config 
	I0813 21:12:54.806879       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71] <==
	* W0813 21:12:28.570166       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0813 21:12:29.072718       1 serving.go:347] Generated self-signed cert in-memory
	W0813 21:12:39.589534       1 authentication.go:345] Error looking up in-cluster authentication configuration: Get "https://192.168.39.210:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0813 21:12:39.589582       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 21:12:39.589595       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 21:12:49.076784       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 21:12:49.077326       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0813 21:12:49.077160       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 21:12:49.091844       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0813 21:12:49.184178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:12:49.187385       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:12:49.189109       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:12:49.189179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:12:49.189241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:49.189380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:12:49.189475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:12:49.189540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:49.189608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:12:49.189783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:12:49.189846       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:12:49.189905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:49.189965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:12:49.192188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 21:12:50.092229       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
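
The burst of "forbidden" reflector errors above looks transient: the scheduler reconnects while the freshly restarted apiserver is still initializing, and the errors do not recur after 21:12:49, with the client-ca informer syncing a second later. A minimal Go sketch of checking such a permission from outside, assuming client-go and the admin kubeconfig at /var/lib/minikube/kubeconfig (the path the harness uses above):

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Ask the apiserver whether kube-scheduler may list pods cluster-wide,
		// one of the exact checks failing in the log above.
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}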
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:11:53 UTC, end at Fri 2021-08-13 21:13:39 UTC. --
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.765077    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkstk\" (UniqueName: \"kubernetes.io/projected/4e36061f-0559-4cde-9b0a-b5cb328d0d76-kube-api-access-jkstk\") pod \"kube-proxy-qt9ld\" (UID: \"4e36061f-0559-4cde-9b0a-b5cb328d0d76\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.765408    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e36061f-0559-4cde-9b0a-b5cb328d0d76-kube-proxy\") pod \"kube-proxy-qt9ld\" (UID: \"4e36061f-0559-4cde-9b0a-b5cb328d0d76\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.765615    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e36061f-0559-4cde-9b0a-b5cb328d0d76-lib-modules\") pod \"kube-proxy-qt9ld\" (UID: \"4e36061f-0559-4cde-9b0a-b5cb328d0d76\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.766048    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd5tc\" (UniqueName: \"kubernetes.io/projected/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6-kube-api-access-pd5tc\") pod \"storage-provisioner\" (UID: \"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.770745    2380 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.987166    2380 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xhrj\" (UniqueName: \"kubernetes.io/projected/ffc12ff0-fe4e-422b-ae81-83f17416e379-kube-api-access-8xhrj\") pod \"ffc12ff0-fe4e-422b-ae81-83f17416e379\" (UID: \"ffc12ff0-fe4e-422b-ae81-83f17416e379\") "
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.987292    2380 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc12ff0-fe4e-422b-ae81-83f17416e379-config-volume\") pod \"ffc12ff0-fe4e-422b-ae81-83f17416e379\" (UID: \"ffc12ff0-fe4e-422b-ae81-83f17416e379\") "
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: W0813 21:12:49.997946    2380 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/ffc12ff0-fe4e-422b-ae81-83f17416e379/volumes/kubernetes.io~projected/kube-api-access-8xhrj: clearQuota called, but quotas disabled
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.000758    2380 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc12ff0-fe4e-422b-ae81-83f17416e379-kube-api-access-8xhrj" (OuterVolumeSpecName: "kube-api-access-8xhrj") pod "ffc12ff0-fe4e-422b-ae81-83f17416e379" (UID: "ffc12ff0-fe4e-422b-ae81-83f17416e379"). InnerVolumeSpecName "kube-api-access-8xhrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: W0813 21:12:50.004339    2380 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/ffc12ff0-fe4e-422b-ae81-83f17416e379/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.011180    2380 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc12ff0-fe4e-422b-ae81-83f17416e379-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffc12ff0-fe4e-422b-ae81-83f17416e379" (UID: "ffc12ff0-fe4e-422b-ae81-83f17416e379"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.088906    2380 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc12ff0-fe4e-422b-ae81-83f17416e379-config-volume\") on node \"newest-cni-20210813210910-30853\" DevicePath \"\""
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.089020    2380 reconciler.go:319] "Volume detached for volume \"kube-api-access-8xhrj\" (UniqueName: \"kubernetes.io/projected/ffc12ff0-fe4e-422b-ae81-83f17416e379-kube-api-access-8xhrj\") on node \"newest-cni-20210813210910-30853\" DevicePath \"\""
	Aug 13 21:12:51 newest-cni-20210813210910-30853 kubelet[2380]: W0813 21:12:51.301805    2380 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice/crio-7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.scope": /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice/crio-7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.scope/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 21:12:52 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:52.899090    2380 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ffc12ff0-fe4e-422b-ae81-83f17416e379 path="/var/lib/kubelet/pods/ffc12ff0-fe4e-422b-ae81-83f17416e379/volumes"
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.160076    2380 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.160115    2380 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.165745    2380 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ns9c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-mrklk_kube-system(ad347f93-2bcc-4e1c-b82c-66f4854c46d2): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.165812    2380 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-mrklk" podUID=ad347f93-2bcc-4e1c-b82c-66f4854c46d2
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.840167    2380 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-mrklk" podUID=ad347f93-2bcc-4e1c-b82c-66f4854c46d2
	Aug 13 21:12:54 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:54.903723    2380 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice/crio-7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.scope\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:12:56 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:56.854241    2380 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 21:12:56 newest-cni-20210813210910-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:12:56 newest-cni-20210813210910-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:12:56 newest-cni-20210813210910-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
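
The ErrImagePull/ImagePullBackOff churn above appears deliberate: this test's metrics-server deployment points at the unreachable registry host fake.domain, so every pull dies at DNS resolution ("no such host") and the kubelet backs off between retries; the final three lines, where systemd stops the kubelet, are consistent with the pause operation this subtest exercises. A minimal sketch of that retry shape (illustrative exponential backoff with kubelet's default 10s-to-5m bounds, not kubelet's actual code; pullImage is a hypothetical stand-in for the failing CRI call):

	package main

	import (
		"fmt"
		"time"
	)

	// pullImage is a hypothetical stand-in for the CRI PullImage call
	// that is failing in the log with "no such host".
	func pullImage(ref string) error {
		return fmt.Errorf("pinging docker registry fake.domain: no such host")
	}

	func main() {
		delay, max := 10*time.Second, 5*time.Minute
		for attempt := 1; attempt <= 5; attempt++ {
			err := pullImage("fake.domain/k8s.gcr.io/echoserver:1.4")
			if err == nil {
				return
			}
			fmt.Printf("attempt %d failed (%v); backing off %s\n", attempt, err, delay)
			time.Sleep(delay) // the real kubelet consults a clock instead of sleeping
			// Double the delay after each failure, capped at the maximum.
			if delay *= 2; delay > max {
				delay = max
			}
		}
	}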
	
	* 
	* ==> storage-provisioner [21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe] <==
	* I0813 21:12:54.045115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:13:39.659032   14692 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853: exit status 2 (250.356523ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
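The status probe above reads a single field of minikube's status output via a Go template. A minimal sketch of the same probe with additional fields, assuming the profile name is unchanged; {{.Host}} appears in the command above, while {{.Kubelet}} and {{.APIServer}} are assumed field names rather than something shown in this log:

	out/minikube-linux-amd64 status -p newest-cni-20210813210910-30853 \
		--format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
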
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813210910-30853 logs -n 25
E0813 21:13:52.658819   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 21:13:55.706565   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:13:56.969554   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210102-30853/client.crt: no such file or directory
E0813 21:14:01.472874   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210813210910-30853 logs -n 25: exit status 110 (41.026198836s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:02 UTC | Fri, 13 Aug 2021 21:09:02 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:05 UTC | Fri, 13 Aug 2021 21:09:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205823-30853                       | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:07 UTC | Fri, 13 Aug 2021 21:09:09 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:09 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813205917-30853                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:10 UTC |
	|         | embed-certs-20210813205917-30853                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205823-30853            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:11 UTC | Fri, 13 Aug 2021 21:09:11 UTC |
	|         | old-k8s-version-20210813205823-30853                       |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:10:25 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:36 UTC | Fri, 13 Aug 2021 21:10:36 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:09:10 UTC | Fri, 13 Aug 2021 21:10:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:38 UTC | Fri, 13 Aug 2021 21:10:39 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813210102-30853            | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:40 UTC | Fri, 13 Aug 2021 21:10:41 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:42 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210102-30853 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:43 UTC | Fri, 13 Aug 2021 21:10:43 UTC |
	|         | default-k8s-different-port-20210813210102-30853            |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:32 UTC | Fri, 13 Aug 2021 21:10:58 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:14 UTC | Fri, 13 Aug 2021 21:11:14 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | no-preload-20210813205915-30853                            | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:17 UTC | Fri, 13 Aug 2021 21:11:18 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | no-preload-20210813205915-30853                            | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:19 UTC | Fri, 13 Aug 2021 21:11:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:21 UTC | Fri, 13 Aug 2021 21:11:22 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813205915-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:22 UTC | Fri, 13 Aug 2021 21:11:22 UTC |
	|         | no-preload-20210813205915-30853                            |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:10:39 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:42 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813210910-30853 --memory=2200           | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:42 UTC | Fri, 13 Aug 2021 21:12:55 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=crio                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813210910-30853                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:56 UTC | Fri, 13 Aug 2021 21:12:56 UTC |
	|         | newest-cni-20210813210910-30853                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:11:42
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:11:42.795927   14367 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:11:42.796007   14367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:42.796011   14367 out.go:311] Setting ErrFile to fd 2...
	I0813 21:11:42.796014   14367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:42.796112   14367 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:11:42.796329   14367 out.go:305] Setting JSON to false
	I0813 21:11:42.831608   14367 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":10465,"bootTime":1628878638,"procs":152,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:11:42.831684   14367 start.go:121] virtualization: kvm guest
	I0813 21:11:42.833942   14367 out.go:177] * [newest-cni-20210813210910-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:11:42.835433   14367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:42.834049   14367 notify.go:169] Checking for updates...
	I0813 21:11:42.836843   14367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:11:42.838221   14367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:11:42.839571   14367 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:11:42.839955   14367 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:42.840311   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:42.840349   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:42.850627   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0813 21:11:42.851039   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:42.851531   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:11:42.851553   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:42.851914   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:42.852078   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:42.852243   14367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:11:42.852539   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:42.852586   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:42.862427   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0813 21:11:42.862794   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:42.863217   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:11:42.863238   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:42.863560   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:42.863722   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:42.890871   14367 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:11:42.890896   14367 start.go:278] selected driver: kvm2
	I0813 21:11:42.890901   14367 start.go:751] validating driver "kvm2" against &{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:11:42.891038   14367 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:11:42.892035   14367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:11:42.892205   14367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:11:42.902128   14367 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:11:42.902465   14367 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:11:42.902493   14367 cni.go:93] Creating CNI manager for ""
	I0813 21:11:42.902501   14367 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:11:42.902511   14367 start_flags.go:277] config:
	{Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:11:42.902637   14367 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:11:42.904436   14367 out.go:177] * Starting control plane node newest-cni-20210813210910-30853 in cluster newest-cni-20210813210910-30853
	I0813 21:11:42.904454   14367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:11:42.904476   14367 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 21:11:42.904493   14367 cache.go:56] Caching tarball of preloaded images
	I0813 21:11:42.904594   14367 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 21:11:42.904611   14367 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 21:11:42.904745   14367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:11:42.904886   14367 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:11:42.904907   14367 start.go:313] acquiring machines lock for newest-cni-20210813210910-30853: {Name:mk2b036d89cf37cc0152d0a0c02b02b678e47b0f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:11:42.904968   14367 start.go:317] acquired machines lock for "newest-cni-20210813210910-30853" in 47.215µs
	I0813 21:11:42.904982   14367 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:11:42.904989   14367 fix.go:55] fixHost starting: 
	I0813 21:11:42.905255   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:11:42.905284   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:42.914701   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0813 21:11:42.915142   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:42.915577   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:11:42.915601   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:42.915893   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:42.916055   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:42.916192   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:11:42.931954   14367 fix.go:108] recreateIfNeeded on newest-cni-20210813210910-30853: state=Stopped err=<nil>
	I0813 21:11:42.931997   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	W0813 21:11:42.932163   14367 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:11:42.934231   14367 out.go:177] * Restarting existing kvm2 VM for "newest-cni-20210813210910-30853" ...
	I0813 21:11:42.934255   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Start
	I0813 21:11:42.934377   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring networks are active...
	I0813 21:11:42.936300   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network default is active
	I0813 21:11:42.936569   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Ensuring network mk-newest-cni-20210813210910-30853 is active
	I0813 21:11:42.936859   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Getting domain xml...
	I0813 21:11:42.938500   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Creating domain...
	I0813 21:11:43.354989   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting to get IP...
	I0813 21:11:43.355874   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.356336   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Found IP for machine: 192.168.39.210
	I0813 21:11:43.356359   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserving static IP address...
	I0813 21:11:43.356376   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has current primary IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.356824   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:43.356867   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | skip adding static IP to network mk-newest-cni-20210813210910-30853 - found existing host DHCP lease matching {name: "newest-cni-20210813210910-30853", mac: "52:54:00:22:60:9f", ip: "192.168.39.210"}
	I0813 21:11:43.356881   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Reserved static IP address: 192.168.39.210
	I0813 21:11:43.356900   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Waiting for SSH to be available...
	I0813 21:11:43.356952   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Getting to WaitForSSH function...
	I0813 21:11:43.361283   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.361723   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:09:25 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:43.361750   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:43.361884   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH client type: external
	I0813 21:11:43.361912   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa (-rw-------)
	I0813 21:11:43.361950   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:11:43.361964   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | About to run SSH command:
	I0813 21:11:43.361999   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | exit 0
	I0813 21:11:55.526780   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | SSH cmd err, output: <nil>: 
	I0813 21:11:55.527178   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetConfigRaw
	I0813 21:11:55.527809   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:11:55.532715   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.533039   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.533072   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.533357   14367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/config.json ...
	I0813 21:11:55.533572   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:55.533782   14367 machine.go:88] provisioning docker machine ...
	I0813 21:11:55.533807   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:55.533995   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:11:55.534156   14367 buildroot.go:166] provisioning hostname "newest-cni-20210813210910-30853"
	I0813 21:11:55.534181   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:11:55.534310   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.538532   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.538833   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.538884   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.538981   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:55.539111   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.539255   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.539365   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:55.539527   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:55.539747   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:55.539769   14367 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813210910-30853 && echo "newest-cni-20210813210910-30853" | sudo tee /etc/hostname
	I0813 21:11:55.703412   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813210910-30853
	
	I0813 21:11:55.703444   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.708657   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.708940   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.708973   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.709072   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:55.709238   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.709378   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.709487   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:55.709631   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:55.709797   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:55.709817   14367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813210910-30853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813210910-30853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813210910-30853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:11:55.868176   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:11:55.868212   14367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:11:55.868238   14367 buildroot.go:174] setting up certificates
	I0813 21:11:55.868253   14367 provision.go:83] configureAuth start
	I0813 21:11:55.868267   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetMachineName
	I0813 21:11:55.868549   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:11:55.873683   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.874036   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.874076   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.874134   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.878497   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.878811   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.878838   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.878968   14367 provision.go:138] copyHostCerts
	I0813 21:11:55.879035   14367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:11:55.879046   14367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:11:55.879102   14367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:11:55.879218   14367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:11:55.879233   14367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:11:55.879257   14367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:11:55.879310   14367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:11:55.879317   14367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:11:55.879335   14367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 21:11:55.879375   14367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813210910-30853 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube newest-cni-20210813210910-30853]
	I0813 21:11:55.964045   14367 provision.go:172] copyRemoteCerts
	I0813 21:11:55.964097   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:11:55.964133   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:55.968772   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.969026   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:55.969055   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:55.969181   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:55.969305   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:55.969455   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:55.969568   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:56.057985   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:11:56.073617   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 21:11:56.089176   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:11:56.105376   14367 provision.go:86] duration metric: configureAuth took 237.110908ms
	I0813 21:11:56.105403   14367 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:11:56.105565   14367 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:56.105657   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.110786   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.111110   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.111138   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.111272   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.111428   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.111608   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.111776   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.111944   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:56.112123   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:56.112140   14367 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 21:11:56.692599   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 21:11:56.692630   14367 machine.go:91] provisioned docker machine in 1.15883504s
	I0813 21:11:56.692649   14367 start.go:267] post-start starting for "newest-cni-20210813210910-30853" (driver="kvm2")
	I0813 21:11:56.692658   14367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:11:56.692680   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.692996   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:11:56.693027   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.698055   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.698335   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.698361   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.698462   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.698675   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.698881   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.699034   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:56.786765   14367 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:11:56.791302   14367 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:11:56.791324   14367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:11:56.791377   14367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:11:56.791562   14367 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem -> 308532.pem in /etc/ssl/certs
	I0813 21:11:56.791658   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:11:56.798543   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:11:56.814799   14367 start.go:270] post-start completed in 122.13662ms
	I0813 21:11:56.814834   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.815087   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.820332   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.820647   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.820675   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.820808   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.820980   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.821166   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.821302   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.821487   14367 main.go:130] libmachine: Using SSH client type: native
	I0813 21:11:56.821671   14367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0813 21:11:56.821684   14367 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:11:56.943374   14367 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628889116.888575334
	
	I0813 21:11:56.943398   14367 fix.go:212] guest clock: 1628889116.888575334
	I0813 21:11:56.943406   14367 fix.go:225] Guest: 2021-08-13 21:11:56.888575334 +0000 UTC Remote: 2021-08-13 21:11:56.815068517 +0000 UTC m=+14.062769203 (delta=73.506817ms)
	I0813 21:11:56.943465   14367 fix.go:196] guest clock delta is within tolerance: 73.506817ms
	I0813 21:11:56.943472   14367 fix.go:57] fixHost completed within 14.038482603s
	I0813 21:11:56.943479   14367 start.go:80] releasing machines lock for "newest-cni-20210813210910-30853", held for 14.038502672s
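The guest/host clock check above runs `date +%s.%N` in the VM and diffs the result against the local clock, accepting it when the skew stays within tolerance. Roughly, as a shell sketch ($SSH_KEY stands in for the machine key shown in the log, and the 2s tolerance is illustrative, not minikube's exact constant):

	guest=$(ssh -i "$SSH_KEY" docker@192.168.39.210 'date +%s.%N')
	host=$(date +%s.%N)
	# absolute delta, compared against the tolerance
	awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.6fs within=%s\n", d, (d <= 2.0 ? "yes" : "no") }'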
	I0813 21:11:56.943518   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.943777   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:11:56.948878   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.949105   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.949141   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.949294   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.949480   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.949924   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:11:56.950226   14367 ssh_runner.go:149] Run: systemctl --version
	I0813 21:11:56.950256   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.950288   14367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:11:56.950336   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:11:56.954753   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.955114   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.955144   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.955201   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.955377   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.955553   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.955683   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:56.955957   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.956306   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:11:56.956339   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:11:56.956470   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:11:56.956615   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:11:56.956754   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:11:56.956887   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:11:57.049083   14367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:11:57.049188   14367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:01.075721   14367 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.026505189s)
	I0813 21:12:01.075899   14367 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:12:01.075949   14367 ssh_runner.go:149] Run: which lz4
	I0813 21:12:01.080367   14367 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:12:01.084549   14367 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:12:01.084574   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 21:12:03.736918   14367 crio.go:362] Took 2.656574 seconds to copy over tarball
	I0813 21:12:03.736981   14367 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:12:08.978441   14367 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.241433608s)
	I0813 21:12:08.978475   14367 crio.go:369] Took 5.241529 seconds to extract the tarball
	I0813 21:12:08.978487   14367 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:12:09.018090   14367 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 21:12:09.030205   14367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 21:12:09.040861   14367 docker.go:153] disabling docker service ...
	I0813 21:12:09.040916   14367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:12:09.052115   14367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:12:09.061719   14367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:12:09.206812   14367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:12:09.326373   14367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:12:09.337093   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:12:09.349902   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 21:12:09.357809   14367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:12:09.364087   14367 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:12:09.364137   14367 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:12:09.377607   14367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:12:09.384218   14367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:12:09.506260   14367 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 21:12:09.784512   14367 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 21:12:09.784591   14367 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 21:12:09.791238   14367 start.go:413] Will wait 60s for crictl version
	I0813 21:12:09.791288   14367 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:12:09.821889   14367 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 21:12:09.821953   14367 ssh_runner.go:149] Run: crio --version
	I0813 21:12:09.891924   14367 ssh_runner.go:149] Run: crio --version
	I0813 21:12:11.735003   14367 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.2 ...
	I0813 21:12:11.735058   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetIP
	I0813 21:12:11.740625   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:11.741006   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:11.741030   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:11.741248   14367 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:12:11.746768   14367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
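Unrolled, that brace-group is an idempotent upsert of /etc/hosts: strip any stale host.minikube.internal entry, append the fresh mapping, and install the result with root privileges (sketch):

	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$    # 1. drop the old entry, if any
	echo $'192.168.39.1\thost.minikube.internal' >> /tmp/h.$$      # 2. append the new mapping
	sudo cp /tmp/h.$$ /etc/hosts                                   # 3. copy back as root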
	I0813 21:12:13.851300   14367 out.go:177]   - kubelet.network-plugin=cni
	I0813 21:12:13.853346   14367 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:12:13.853431   14367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 21:12:13.853508   14367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:13.897566   14367 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:12:13.897588   14367 crio.go:333] Images already preloaded, skipping extraction
	I0813 21:12:13.897634   14367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:13.927806   14367 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 21:12:13.927832   14367 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:12:13.927899   14367 ssh_runner.go:149] Run: crio config
	I0813 21:12:14.192898   14367 cni.go:93] Creating CNI manager for ""
	I0813 21:12:14.192926   14367 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:12:14.192939   14367 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:12:14.192958   14367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813210910-30853 NodeName:newest-cni-20210813210910-30853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.210 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:12:14.193109   14367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813210910-30853"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:12:14.193215   14367 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813210910-30853 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.210 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:12:14.193279   14367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:12:14.201392   14367 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:12:14.201465   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:12:14.208876   14367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (554 bytes)
	I0813 21:12:14.219827   14367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:12:14.230627   14367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0813 21:12:14.242277   14367 ssh_runner.go:149] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0813 21:12:14.245984   14367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:12:14.256040   14367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853 for IP: 192.168.39.210
	I0813 21:12:14.256095   14367 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:12:14.256115   14367 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:12:14.256166   14367 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/client.key
	I0813 21:12:14.256189   14367 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key.6213553a
	I0813 21:12:14.256210   14367 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key
	I0813 21:12:14.256310   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem (1338 bytes)
	W0813 21:12:14.256353   14367 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853_empty.pem, impossibly tiny 0 bytes
	I0813 21:12:14.256370   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:12:14.256397   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:12:14.256422   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:12:14.256450   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 21:12:14.256497   14367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem (1708 bytes)
	I0813 21:12:14.257477   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:12:14.273859   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:12:14.290500   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:12:14.306545   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813210910-30853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:12:14.322565   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:12:14.338996   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 21:12:14.354718   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:12:14.370471   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 21:12:14.386542   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/30853.pem --> /usr/share/ca-certificates/30853.pem (1338 bytes)
	I0813 21:12:14.402369   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/308532.pem --> /usr/share/ca-certificates/308532.pem (1708 bytes)
	I0813 21:12:14.418038   14367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:12:14.434102   14367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:12:14.445953   14367 ssh_runner.go:149] Run: openssl version
	I0813 21:12:14.451937   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30853.pem && ln -fs /usr/share/ca-certificates/30853.pem /etc/ssl/certs/30853.pem"
	I0813 21:12:14.459153   14367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30853.pem
	I0813 21:12:14.463692   14367 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:18 /usr/share/ca-certificates/30853.pem
	I0813 21:12:14.463732   14367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30853.pem
	I0813 21:12:14.469219   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30853.pem /etc/ssl/certs/51391683.0"
	I0813 21:12:14.476824   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/308532.pem && ln -fs /usr/share/ca-certificates/308532.pem /etc/ssl/certs/308532.pem"
	I0813 21:12:14.484315   14367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/308532.pem
	I0813 21:12:14.488839   14367 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:18 /usr/share/ca-certificates/308532.pem
	I0813 21:12:14.488880   14367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308532.pem
	I0813 21:12:14.494335   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/308532.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:12:14.501820   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:12:14.509124   14367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:14.513481   14367 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:08 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:14.513509   14367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:14.518990   14367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
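Each CA certificate installed above is also symlinked by its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so that hash-based certificate lookup works; the underlying idiom is (sketch, shown for the minikubeCA bundle):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"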
	I0813 21:12:14.526670   14367 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813210910-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813210910-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:14.526755   14367 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 21:12:14.526785   14367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:12:14.558524   14367 cri.go:76] found id: ""
	I0813 21:12:14.558576   14367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:12:14.566751   14367 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:12:14.566777   14367 kubeadm.go:600] restartCluster start
	I0813 21:12:14.566871   14367 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:12:14.573499   14367 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:14.574151   14367 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210813210910-30853" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:14.574242   14367 kubeconfig.go:128] "newest-cni-20210813210910-30853" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:12:14.574539   14367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:14.576626   14367 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:12:14.582645   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:14.582686   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:14.591111   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:14.791510   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:14.791586   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:14.801142   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:14.991370   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:14.991447   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.000584   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.191896   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.191970   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.201129   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.391496   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.391569   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.401482   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.591854   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.591936   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.600994   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.791233   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.791296   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:15.800423   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:15.991725   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:15.991807   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.000869   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.192288   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.192396   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.201776   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.392113   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.392184   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.401017   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.591247   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.591333   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.600685   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.791939   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.792040   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:16.801253   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:16.991530   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:16.991617   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.000621   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.191933   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.192020   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.201085   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.391391   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.391478   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.400828   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.592213   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.592318   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.601566   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.601584   14367 api_server.go:164] Checking apiserver status ...
	I0813 21:12:17.601629   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:12:17.609718   14367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:12:17.609734   14367 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:12:17.609742   14367 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:12:17.609756   14367 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 21:12:17.609808   14367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:12:17.643300   14367 cri.go:76] found id: ""
	I0813 21:12:17.643371   14367 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:12:17.657320   14367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:12:17.665940   14367 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:12:17.665987   14367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:12:17.672626   14367 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:12:17.672644   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:17.812928   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.022414   14367 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.209447891s)
	I0813 21:12:19.022450   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.276932   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.417749   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:19.520816   14367 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:12:19.520881   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:20.034374   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:20.534125   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:21.033925   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:21.534509   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:22.033961   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:22.534630   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:23.034407   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:23.534686   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:24.034496   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:24.533962   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:25.033759   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:25.533782   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:26.033867   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:26.534648   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:27.034478   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:27.534627   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:28.034344   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:28.046824   14367 api_server.go:70] duration metric: took 8.526007952s to wait for apiserver process to appear ...
	I0813 21:12:28.046849   14367 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:12:28.046873   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:33.047532   14367 api_server.go:255] stopped: https://192.168.39.210:8443/healthz: Get "https://192.168.39.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:12:33.548327   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:38.549533   14367 api_server.go:255] stopped: https://192.168.39.210:8443/healthz: Get "https://192.168.39.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:12:39.048081   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:44.049250   14367 api_server.go:255] stopped: https://192.168.39.210:8443/healthz: Get "https://192.168.39.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:12:44.547817   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:49.073312   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:12:49.073339   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:12:49.547901   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:49.557738   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:12:49.557767   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:12:50.048422   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:50.078759   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:12:50.078800   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:12:50.548386   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:50.555157   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:12:50.555185   14367 api_server.go:101] status: https://192.168.39.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:12:51.047708   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:51.064435   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:12:51.088794   14367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:12:51.088819   14367 api_server.go:129] duration metric: took 23.041952464s to wait for apiserver health ...
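The 23s wait above is a plain poll of /healthz until the apiserver answers 200; during bootstrap the endpoint first returns 403 (the probe is anonymous) and then 500 while post-start hooks such as rbac/bootstrap-roles finish. An equivalent poll as a sketch (URL and interval taken from the log; -k because the client does not yet trust the bootstrap certificate):

	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.210:8443/healthz)" = "200" ]; do
	  sleep 0.5
	done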
	I0813 21:12:51.088830   14367 cni.go:93] Creating CNI manager for ""
	I0813 21:12:51.088848   14367 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 21:12:51.090600   14367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:12:51.090659   14367 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:12:51.114542   14367 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:12:51.163278   14367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:12:51.180593   14367 system_pods.go:59] 9 kube-system pods found
	I0813 21:12:51.180640   14367 system_pods.go:61] "coredns-78fcd69978-42frp" [ffc12ff0-fe4e-422b-ae81-83f17416e379] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:12:51.180647   14367 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:12:51.180654   14367 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 21:12:51.180659   14367 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:12:51.180665   14367 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running
	I0813 21:12:51.180672   14367 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:12:51.180679   14367 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:12:51.180683   14367 system_pods.go:61] "metrics-server-7c784ccb57-mrklk" [ad347f93-2bcc-4e1c-b82c-66f4854c46d2] Pending
	I0813 21:12:51.180688   14367 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:12:51.180695   14367 system_pods.go:74] duration metric: took 17.398617ms to wait for pod list to return data ...
	I0813 21:12:51.180702   14367 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:12:51.195318   14367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:12:51.195350   14367 node_conditions.go:123] node cpu capacity is 2
	I0813 21:12:51.195364   14367 node_conditions.go:105] duration metric: took 14.656302ms to run NodePressure ...
	I0813 21:12:51.195384   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:12:52.312553   14367 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.117144753s)
	I0813 21:12:52.312593   14367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:12:52.356977   14367 ops.go:34] apiserver oom_adj: -16
	I0813 21:12:52.357001   14367 kubeadm.go:604] restartCluster took 37.790217793s
	I0813 21:12:52.357011   14367 kubeadm.go:392] StartCluster complete in 37.83034654s
	I0813 21:12:52.357032   14367 settings.go:142] acquiring lock: {Name:mk53bc8e7bf3f509cc94ba4120b090d2c255a81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:52.357142   14367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:52.357747   14367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mke5bcf0339b3e25972d16c3b053b0728d6abad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:52.364948   14367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813210910-30853" rescaled to 1
	I0813 21:12:52.365013   14367 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:12:52.365042   14367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:12:52.365064   14367 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:12:52.366957   14367 out.go:177] * Verifying Kubernetes components...
	I0813 21:12:52.365142   14367 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367043   14367 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813210910-30853"
	I0813 21:12:52.367055   14367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:52.365153   14367 addons.go:59] Setting dashboard=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367068   14367 addons.go:135] Setting addon dashboard=true in "newest-cni-20210813210910-30853"
	W0813 21:12:52.367075   14367 addons.go:147] addon dashboard should already be in state true
	I0813 21:12:52.367104   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	W0813 21:12:52.367057   14367 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:12:52.367143   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:52.365163   14367 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367189   14367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813210910-30853"
	I0813 21:12:52.365171   14367 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210813210910-30853"
	I0813 21:12:52.367242   14367 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210813210910-30853"
	W0813 21:12:52.367259   14367 addons.go:147] addon metrics-server should already be in state true
	I0813 21:12:52.367313   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:52.365246   14367 config.go:177] Loaded profile config "newest-cni-20210813210910-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:52.367562   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367590   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367602   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.367602   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367631   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.367706   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.367786   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.367825   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.379237   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0813 21:12:52.379715   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.381362   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.381386   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.381428   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0813 21:12:52.381814   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.381961   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.382458   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.382494   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.382686   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0813 21:12:52.382786   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.382804   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.382823   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0813 21:12:52.383120   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.383163   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.383200   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.383539   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.383556   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.383668   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.383690   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.383729   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.383768   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.383943   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.384021   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.384516   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.384555   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.384664   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.394410   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0813 21:12:52.395550   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.396103   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.396129   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.396535   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.396744   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.400277   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.402440   14367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:12:52.402559   14367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:12:52.402579   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:12:52.402600   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.405151   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0813 21:12:52.405530   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.405998   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.406023   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.406433   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.406615   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.407218   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0813 21:12:52.407647   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.408083   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.408109   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.408439   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.408620   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.409099   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.409688   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.409714   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.409839   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.410000   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.410152   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.410160   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.410298   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.412246   14367 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:12:52.411625   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.413764   14367 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:12:52.413833   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:12:52.415179   14367 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:12:52.415234   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:12:52.415246   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:12:52.413846   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:12:52.415272   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.415281   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.420440   14367 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813210910-30853"
	W0813 21:12:52.420462   14367 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:12:52.420493   14367 host.go:66] Checking if "newest-cni-20210813210910-30853" exists ...
	I0813 21:12:52.420827   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.420870   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.421205   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.421629   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.421659   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.421870   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.422060   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.422186   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.422286   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.422314   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.422724   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.422755   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.422901   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.423036   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.423159   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.423286   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
	I0813 21:12:52.431929   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0813 21:12:52.432276   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.432667   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.432688   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.432966   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.433564   14367 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 21:12:52.433609   14367 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:52.470073   14367 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0813 21:12:52.470530   14367 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:52.471088   14367 main.go:130] libmachine: Using API Version  1
	I0813 21:12:52.471110   14367 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:52.471445   14367 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:52.471627   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetState
	I0813 21:12:52.474940   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .DriverName
	I0813 21:12:52.475188   14367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:12:52.475204   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:12:52.475220   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHHostname
	I0813 21:12:52.480469   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.480823   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:60:9f", ip: ""} in network mk-newest-cni-20210813210910-30853: {Iface:virbr1 ExpiryTime:2021-08-13 22:11:53 +0000 UTC Type:0 Mac:52:54:00:22:60:9f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:newest-cni-20210813210910-30853 Clientid:01:52:54:00:22:60:9f}
	I0813 21:12:52.480853   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | domain newest-cni-20210813210910-30853 has defined IP address 192.168.39.210 and MAC address 52:54:00:22:60:9f in network mk-newest-cni-20210813210910-30853
	I0813 21:12:52.480971   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHPort
	I0813 21:12:52.481124   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHKeyPath
	I0813 21:12:52.481293   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .GetSSHUsername
	I0813 21:12:52.481434   14367 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813210910-30853/id_rsa Username:docker}
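
Each sshutil.go:53 line above records the parameters of a fresh SSH client into the VM: host 192.168.39.210, port 22, user docker, and the per-machine id_rsa key, over which the addon manifests are then applied. A hedged sketch of opening an equivalent client with golang.org/x/crypto/ssh (key path shortened; InsecureIgnoreHostKey is used only because the test VM's host key is not pre-registered):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the per-machine private key recorded in the log (path shortened).
	keyBytes, err := os.ReadFile("/home/jenkins/.../machines/newest-cni-20210813210910-30853/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.210:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One session per command, the way ssh_runner executes kubectl above.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("ls /etc/kubernetes/addons")
	fmt.Print(string(out))
}
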
	I0813 21:12:52.601256   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:12:52.683739   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:12:52.683768   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:12:52.703159   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:12:52.703192   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:12:52.748843   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:12:52.831110   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:12:52.831139   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:12:52.835435   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:12:52.835459   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:12:53.104100   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:12:53.104124   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:12:53.157894   14367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:12:53.157919   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:12:53.204915   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:12:53.204946   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:12:53.222988   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:12:53.260055   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:12:53.260085   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:12:53.493428   14367 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.128352719s)
	I0813 21:12:53.493517   14367 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.126438109s)
	I0813 21:12:53.493561   14367 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:12:53.493614   14367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:12:53.493524   14367 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 21:12:53.603564   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:12:53.603589   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:12:53.956970   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:12:53.956999   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:12:54.139515   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:12:54.139539   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:12:54.251231   14367 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:12:54.251259   14367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:12:54.508265   14367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:12:54.552926   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.951611096s)
	I0813 21:12:54.552988   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.553025   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.553309   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.553326   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.553342   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.553360   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.553373   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.553589   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.553603   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.553629   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.926950   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.178060823s)
	I0813 21:12:54.927005   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.927018   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.927304   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.927374   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.927395   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.927408   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.927420   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.927628   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.927647   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:54.927650   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:54.927670   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:54.927686   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:54.927912   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:54.927923   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.173496   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.950447285s)
	I0813 21:12:55.173554   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.173571   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.173579   14367 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.679944797s)
	I0813 21:12:55.173598   14367 api_server.go:70] duration metric: took 2.808558842s to wait for apiserver process to appear ...
	I0813 21:12:55.173604   14367 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:12:55.173613   14367 api_server.go:239] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0813 21:12:55.173905   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:55.173919   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.173936   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.173957   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.173972   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.174167   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.174201   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.174217   14367 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813210910-30853"
	I0813 21:12:55.181805   14367 api_server.go:265] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0813 21:12:55.183106   14367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:12:55.183121   14367 api_server.go:129] duration metric: took 9.513019ms to wait for apiserver health ...
	I0813 21:12:55.183129   14367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:12:55.194552   14367 system_pods.go:59] 8 kube-system pods found
	I0813 21:12:55.194577   14367 system_pods.go:61] "coredns-78fcd69978-bc587" [0d2dab50-994b-4314-8922-0e8a913a9b26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:12:55.194582   14367 system_pods.go:61] "etcd-newest-cni-20210813210910-30853" [a6811fb7-a94c-45db-91d0-34c033aa1eab] Running
	I0813 21:12:55.194587   14367 system_pods.go:61] "kube-apiserver-newest-cni-20210813210910-30853" [bdcdda0b-8c06-4c71-8f0a-66d55d331267] Running
	I0813 21:12:55.194595   14367 system_pods.go:61] "kube-controller-manager-newest-cni-20210813210910-30853" [374fba93-8efe-439f-8aec-50ae02d227e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:12:55.194604   14367 system_pods.go:61] "kube-proxy-qt9ld" [4e36061f-0559-4cde-9b0a-b5cb328d0d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:12:55.194612   14367 system_pods.go:61] "kube-scheduler-newest-cni-20210813210910-30853" [bdf4950a-8d5e-434c-8c99-20e475c71f65] Running
	I0813 21:12:55.194623   14367 system_pods.go:61] "metrics-server-7c784ccb57-mrklk" [ad347f93-2bcc-4e1c-b82c-66f4854c46d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:55.194631   14367 system_pods.go:61] "storage-provisioner" [5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:12:55.194645   14367 system_pods.go:74] duration metric: took 11.511833ms to wait for pod list to return data ...
	I0813 21:12:55.194653   14367 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:12:55.197911   14367 default_sa.go:45] found service account: "default"
	I0813 21:12:55.197931   14367 default_sa.go:55] duration metric: took 3.2722ms for default service account to be created ...
	I0813 21:12:55.197940   14367 kubeadm.go:547] duration metric: took 2.832901179s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:12:55.197966   14367 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:12:55.201445   14367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:12:55.201468   14367 node_conditions.go:123] node cpu capacity is 2
	I0813 21:12:55.201482   14367 node_conditions.go:105] duration metric: took 3.51037ms to run NodePressure ...
	I0813 21:12:55.201491   14367 start.go:231] waiting for startup goroutines ...
	I0813 21:12:55.694029   14367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.185700485s)
	I0813 21:12:55.694135   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.694155   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.694470   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:55.694528   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.694553   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.694564   14367 main.go:130] libmachine: Making call to close driver server
	I0813 21:12:55.694577   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) Calling .Close
	I0813 21:12:55.694846   14367 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:12:55.694875   14367 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:12:55.694906   14367 main.go:130] libmachine: (newest-cni-20210813210910-30853) DBG | Closing plugin on server side
	I0813 21:12:55.696782   14367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 21:12:55.696806   14367 addons.go:344] enableAddons completed in 3.331747172s
	I0813 21:12:55.741873   14367 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:12:55.743280   14367 out.go:177] 
	W0813 21:12:55.743425   14367 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:12:55.744924   14367 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:12:55.746315   14367 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813210910-30853" cluster and "default" namespace by default
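
The run finishes with a version-skew warning: the host kubectl is 1.20.5 while the cluster runs 1.22.0-rc.0, a minor skew of 2, outside kubectl's supported window of one minor version either side of the server. A small Go sketch of that check (version parsing simplified; not minikube's code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a version such as "1.22.0-rc.0".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.20.5", "1.22.0-rc.0"
	skew := minor(cluster) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	if skew > 1 { // kubectl supports +/- one minor version of the server
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s (minor skew: %d)\n",
			client, cluster, skew)
	}
}
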
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 21:11:53 UTC, end at Fri 2021-08-13 21:13:40 UTC. --
	Aug 13 21:13:39 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:39.973127785Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1628889173604162723,StartedAt:1628889174014587365,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6be87df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/containers/storage-provisioner/3b35aef2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/volumes/kubernetes.io~projected/kube-api-access-pd5tc,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6/storage-provisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=0d2bc3d6-0d8e-47a3-9f34-69d2ae305a14 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Aug 13 21:13:40 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:40.521280887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3f6889da-e551-4343-ac1b-21ade402688b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:13:40 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:40.521346087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3f6889da-e551-4343-ac1b-21ade402688b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:13:40 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:40.521560859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669,PodSandboxId:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8,State:CONTAINER_RUNNING,CreatedAt:1628889174119158329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qt9ld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e36061f-0559-4cde-9b0a-b5cb328d0d76,},Annotations:map[string]string{io.kubernetes.container.hash: 9c81cf57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe,PodSandboxId:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889173604162723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6be87df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_RUNNING,CreatedAt:1628889170148091333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203,PodSandboxId:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889158440159537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab7e5e84ea4e6309241a6623f47ddd8,},Annotations:map[string]string{io.kubernetes.container.hash: f0960535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71,PodSandboxId:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628889147629888101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b2831a6feaa48869fe13cec6b8ce22,},Annotations:map[string]string{io.kubernetes.container.hash: a0decd21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628889147271783918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291,PodSandboxId:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_RUNNING,CreatedAt:1628889147070852027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32688baa2c6a65d13ce71d2e854f4832,},Annotations:map[string]string{io.kubernetes.container.hash: ffb6a91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3f6889da-e551-4343-ac1b-21ade402688b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:13:40 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:40.565475034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b2f98e4-1368-455b-b25a-105fcc5ea99d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:13:40 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:40.565542147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b2f98e4-1368-455b-b25a-105fcc5ea99d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 21:13:40 newest-cni-20210813210910-30853 crio[2043]: time="2021-08-13 21:13:40.565861383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669,PodSandboxId:7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8,State:CONTAINER_RUNNING,CreatedAt:1628889174119158329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qt9ld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e36061f-0559-4cde-9b0a-b5cb328d0d76,},Annotations:map[string]string{io.kubernetes.container.hash: 9c81cf57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe,PodSandboxId:cd3ff57b787d3c509a71e579051e12f9583281374607e6965872fdce56e4c7ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628889173604162723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6be87df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_RUNNING,CreatedAt:1628889170148091333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203,PodSandboxId:248f7b6f7fd025b14dff9123fdf25fe4a5f1911dccd1854724ad29a029c32995,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,State:CONTAINER_RUNNING,CreatedAt:1628889158440159537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab7e5e84ea4e6309241a6623f47ddd8,},Annotations:map[string]string{io.kubernetes.container.hash: f0960535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71,PodSandboxId:f547e1140c1402519d54b5d34346b15ecd735d5f2a1362ea8ca4fe218970c882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:184be73276e4e34dc62d3a50f61383aa0b5b8c3e3442deacca01edf00ff0cb9a,State:CONTAINER_RUNNING,CreatedAt:1628889147629888101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b2831a6feaa48869fe13cec6b8ce22,},Annotations:map[string]string{io.kubernetes.container.hash: a0decd21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78,PodSandboxId:769393d983373f5fb98b10ceb225551b34a365449246b2d5b779e299c08d3054,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:7a76c71d449fe638d80b4beabd0a72ba401a088dddf62b03a25e545b0433cf13,State:CONTAINER_EXITED,CreatedAt:1628889147271783918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb68b72f76f9aae78202c9c8c37cac6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3da1e13c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291,PodSandboxId:433cba576a12afcac4e6e292585bd001ab30ec205358060a8fc16229926fb534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:04d03eb7fcdde91f49b8232a1e4b7737e3efac762df2862c1a4fe9b219af2212,State:CONTAINER_RUNNING,CreatedAt:1628889147070852027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-newest-cni-20210813210910-30853,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32688baa2c6a65d13ce71d2e854f4832,},Annotations:map[string]string{io.kubernetes.container.hash: ffb6a91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b2f98e4-1368-455b-b25a-105fcc5ea99d name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                     CREATED              STATE               NAME                      ATTEMPT             POD ID
	e2863ab689591       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                          46 seconds ago       Running             kube-proxy                1                   7d4ecadfd7f19
	21ee344d5f9ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                          47 seconds ago       Running             storage-provisioner       0                   cd3ff57b787d3
	f39aba8b3d625       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                          50 seconds ago       Running             kube-controller-manager   2                   769393d983373
	81f490d516432       k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d   About a minute ago   Running             etcd                      1                   248f7b6f7fd02
	5bbe5f8c98c37       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                          About a minute ago   Running             kube-scheduler            1                   f547e1140c140
	f0de6c0b2f66a       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                          About a minute ago   Exited              kube-controller-manager   1                   769393d983373
	09c7d19e2c150       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                          About a minute ago   Running             kube-apiserver            1                   433cba576a12a
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.091915] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.801628] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000020] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.309380] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.037434] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.012704] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1720 comm=systemd-network
	[  +0.553475] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.340981] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005458] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:12] systemd-fstab-generator[2140]: Ignoring "noauto" for root device
	[  +0.129930] systemd-fstab-generator[2153]: Ignoring "noauto" for root device
	[  +0.170141] systemd-fstab-generator[2180]: Ignoring "noauto" for root device
	[  +9.751619] systemd-fstab-generator[2372]: Ignoring "noauto" for root device
	[ +31.719822] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.007272] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.721371] systemd-fstab-generator[3536]: Ignoring "noauto" for root device
	[  +0.824042] systemd-fstab-generator[3590]: Ignoring "noauto" for root device
	[  +0.942121] systemd-fstab-generator[3645]: Ignoring "noauto" for root device
	[Aug13 21:13] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [81f490d51643254724d7be38ee21b4a9a29fcb705b2a7e44a8904194109f5203] <==
	* {"level":"info","ts":"2021-08-13T21:12:38.772Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-13T21:12:38.775Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"5a5dd032def1271d","local-server-version":"3.5.0","cluster-id":"989b3f6bb1f1f8ce","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"5a5dd032def1271d","initial-advertise-peer-urls":["https://192.168.39.210:2380"],"listen-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T21:12:38.779Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"5a5dd032def1271d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d switched to configuration voters=(6511589553154893597)"}
	{"level":"info","ts":"2021-08-13T21:12:38.780Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"989b3f6bb1f1f8ce","local-member-id":"5a5dd032def1271d","added-peer-id":"5a5dd032def1271d","added-peer-peer-urls":["https://192.168.39.210:2380"]}
	{"level":"info","ts":"2021-08-13T21:12:38.781Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"989b3f6bb1f1f8ce","local-member-id":"5a5dd032def1271d","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d received MsgPreVoteResp from 5a5dd032def1271d at term 2"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became candidate at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d received MsgVoteResp from 5a5dd032def1271d at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became leader at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5a5dd032def1271d elected leader 5a5dd032def1271d at term 3"}
	{"level":"info","ts":"2021-08-13T21:12:39.458Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"5a5dd032def1271d","local-member-attributes":"{Name:newest-cni-20210813210910-30853 ClientURLs:[https://192.168.39.210:2379]}","request-path":"/0/members/5a5dd032def1271d/attributes","cluster-id":"989b3f6bb1f1f8ce","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T21:12:39.458Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:12:39.459Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:12:39.461Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.210:2379"}
	{"level":"info","ts":"2021-08-13T21:12:39.461Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T21:12:39.461Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T21:12:39.463Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:14:21 up 2 min,  0 users,  load average: 0.66, 0.40, 0.15
	Linux newest-cni-20210813210910-30853 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [09c7d19e2c150b97aa63e34f66713b45d7a295455075260e21369d8d90955291] <==
	* I0813 21:12:49.115844       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0813 21:12:49.115949       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0813 21:12:49.115994       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 21:12:49.128554       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0813 21:12:49.128838       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0813 21:12:49.229982       1 cache.go:39] Caches are synced for autoregister controller
	I0813 21:12:49.230133       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0813 21:12:49.230969       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 21:12:49.231091       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:12:49.232030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:12:49.272786       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 21:12:49.305976       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:12:50.004094       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:12:50.137805       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:12:50.139995       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	W0813 21:12:51.138280       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:12:51.138441       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:12:51.138596       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:12:51.812177       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:12:51.876910       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:12:52.193096       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:12:52.261543       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:12:52.281306       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 21:12:55.321243       1 controller.go:611] quota admission added evaluator for: namespaces
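	(The 503 for v1beta1.metrics.k8s.io above means the aggregation layer had no healthy metrics-server backend, consistent with the image-pull failures in the kubelet section below. A hypothetical status check, reusing the in-guest kubectl and kubeconfig paths that appear in this run's stderr:)
	  minikube -p newest-cni-20210813210910-30853 ssh "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get apiservice v1beta1.metrics.k8s.io"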
	
	* 
	* ==> kube-controller-manager [f0de6c0b2f66afb03a9d35732c696df6ce4aed4b182470c5b23796978e606c78] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xbe
	crypto/tls.(*Conn).readFromUntil(0xc00036aa80, 0x5176a20, 0xc00093c070, 0x5, 0xc00093c070, 0x99)
		/usr/local/go/src/crypto/tls/conn.go:798 +0xf3
	crypto/tls.(*Conn).readRecordOrCCS(0xc00036aa80, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:605 +0x115
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:573
	crypto/tls.(*Conn).Read(0xc00036aa80, 0xc000a53000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:1276 +0x165
	bufio.(*Reader).Read(0xc0006213e0, 0xc0001ad6f8, 0x9, 0x9, 0x99f9cb, 0xc000914c78, 0x4071a5)
		/usr/local/go/src/bufio/bufio.go:227 +0x222
	io.ReadAtLeast(0x516f360, 0xc0006213e0, 0xc0001ad6f8, 0x9, 0x9, 0x9, 0xc000a2f5e0, 0x72199d9e98c000, 0xc000a2f5e0)
		/usr/local/go/src/io/io.go:328 +0x87
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc0001ad6f8, 0x9, 0x9, 0x516f360, 0xc0006213e0, 0x0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001ad6c0, 0xc000a31710, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000914fa8, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1821 +0xd8
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000120d80)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1743 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5
	
	* 
	* ==> kube-controller-manager [f39aba8b3d625e901b0264d73a76f06c4458396694ae68a2e197a9048fd69d24] <==
	* W0813 21:13:40.631092       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 21:13:40.631409       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 21:13:40.631906       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.210:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": http2: client connection lost
	W0813 21:13:51.135090       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.210:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": net/http: TLS handshake timeout
	I0813 21:13:51.513350       1 trace.go:205] Trace[1327602176]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:41.510) (total time: 10002ms):
	Trace[1327602176]: [10.002327233s] [10.002327233s] END
	E0813 21:13:51.513484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://192.168.39.210:8443/api/v1/secrets?resourceVersion=602": net/http: TLS handshake timeout
	I0813 21:13:52.118253       1 trace.go:205] Trace[1327627983]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:42.116) (total time: 10001ms):
	Trace[1327627983]: [10.001448291s] [10.001448291s] END
	E0813 21:13:52.118366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.39.210:8443/api/v1/serviceaccounts?resourceVersion=599": net/http: TLS handshake timeout
	W0813 21:14:02.138485       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.210:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": net/http: TLS handshake timeout
	I0813 21:14:04.113016       1 trace.go:205] Trace[727840149]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:54.110) (total time: 10002ms):
	Trace[727840149]: [10.002145006s] [10.002145006s] END
	E0813 21:14:04.113137       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.39.210:8443/api/v1/serviceaccounts?resourceVersion=599": net/http: TLS handshake timeout
	I0813 21:14:04.616601       1 trace.go:205] Trace[1291908600]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:54.614) (total time: 10001ms):
	Trace[1291908600]: [10.001777676s] [10.001777676s] END
	E0813 21:14:04.616964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://192.168.39.210:8443/api/v1/secrets?resourceVersion=602": net/http: TLS handshake timeout
	W0813 21:14:14.141086       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.210:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": net/http: TLS handshake timeout
	E0813 21:14:14.141302       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.39.210:8443/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	I0813 21:14:18.166534       1 trace.go:205] Trace[2122100593]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:14:08.164) (total time: 10001ms):
	Trace[2122100593]: [10.001621207s] [10.001621207s] END
	E0813 21:14:18.166904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.39.210:8443/api/v1/serviceaccounts?resourceVersion=599": net/http: TLS handshake timeout
	I0813 21:14:20.497493       1 trace.go:205] Trace[65890930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:14:10.495) (total time: 10001ms):
	Trace[65890930]: [10.001479689s] [10.001479689s] END
	E0813 21:14:20.497875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://192.168.39.210:8443/api/v1/secrets?resourceVersion=602": net/http: TLS handshake timeout
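	(Every error above is one symptom: list/watch calls to https://192.168.39.210:8443 stall in the TLS handshake. A hypothetical in-guest probe that times just the handshake via curl's time_appconnect variable, assuming the profile is still running:)
	  minikube -p newest-cni-20210813210910-30853 ssh "curl -k -sS -o /dev/null -m 10 -w '%{http_code} appconnect=%{time_appconnect}s\n' https://192.168.39.210:8443/healthz"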
	
	* 
	* ==> kube-proxy [e2863ab6895918c7c6ae73849055108a7e271b8f63d9480ecca2f9ae686b5669] <==
	* I0813 21:12:54.703284       1 server.go:649] Version: v1.22.0-rc.0
	I0813 21:12:54.705773       1 config.go:315] Starting service config controller
	I0813 21:12:54.705886       1 config.go:224] Starting endpoint slice config controller
	I0813 21:12:54.706015       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 21:12:54.706015       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0813 21:12:54.724469       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813210910-30853.169afa12fcd1ead9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd5f5aa120176, ext:375057157, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813210910-30853", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813210910-30853", UID:"newest-cni-20210813210910-30853", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813210910-30853.169afa12fcd1ead9" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 21:12:54.806851       1 shared_informer.go:247] Caches are synced for service config 
	I0813 21:12:54.806879       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	W0813 21:13:40.688346       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 21:13:40.688550       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0813 21:13:51.554586       1 trace.go:205] Trace[301213320]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:41.551) (total time: 10002ms):
	Trace[301213320]: [10.00267453s] [10.00267453s] END
	E0813 21:13:51.554864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=606": net/http: TLS handshake timeout
	I0813 21:13:52.047986       1 trace.go:205] Trace[480042950]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:42.045) (total time: 10002ms):
	Trace[480042950]: [10.002019783s] [10.002019783s] END
	E0813 21:13:52.048089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=528": net/http: TLS handshake timeout
	I0813 21:14:03.835434       1 trace.go:205] Trace[1064133871]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:53.832) (total time: 10002ms):
	Trace[1064133871]: [10.00254104s] [10.00254104s] END
	E0813 21:14:03.835553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=606": net/http: TLS handshake timeout
	I0813 21:14:05.213044       1 trace.go:205] Trace[1445967025]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:13:55.210) (total time: 10002ms):
	Trace[1445967025]: [10.002193774s] [10.002193774s] END
	E0813 21:14:05.213179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=528": net/http: TLS handshake timeout
	I0813 21:14:19.612001       1 trace.go:205] Trace[15551652]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (13-Aug-2021 21:14:09.610) (total time: 10001ms):
	Trace[15551652]: [10.00161364s] [10.00161364s] END
	E0813 21:14:19.612041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=606": net/http: TLS handshake timeout
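	(The %!s(MISSING) noise in the URLs above is the selector's percent-encoding being re-read as printf verbs when logged; the intended query is labelSelector=!service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name.)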
	
	* 
	* ==> kube-scheduler [5bbe5f8c98c374f45093ed741627b92b05e7d70e0f123e23b42741db407f9d71] <==
	* W0813 21:12:28.570166       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0813 21:12:29.072718       1 serving.go:347] Generated self-signed cert in-memory
	W0813 21:12:39.589534       1 authentication.go:345] Error looking up in-cluster authentication configuration: Get "https://192.168.39.210:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0813 21:12:39.589582       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 21:12:39.589595       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 21:12:49.076784       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 21:12:49.077326       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0813 21:12:49.077160       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 21:12:49.091844       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0813 21:12:49.184178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:12:49.187385       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:12:49.189109       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:12:49.189179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:12:49.189241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:49.189380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:12:49.189475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:12:49.189540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:49.189608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:12:49.189783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:12:49.189846       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:12:49.189905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:49.189965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:12:49.192188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 21:12:50.092229       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
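	(The forbidden errors at 21:12:49 are the usual startup race while the restarted apiserver finishes wiring up RBAC; they do not recur in this excerpt. A hypothetical after-the-fact permission check, assuming the in-guest kubectl and kubeconfig paths from this run's stderr:)
	  minikube -p newest-cni-20210813210910-30853 ssh "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig auth can-i list poddisruptionbudgets --as=system:kube-scheduler"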
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:11:53 UTC, end at Fri 2021-08-13 21:14:21 UTC. --
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.765077    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkstk\" (UniqueName: \"kubernetes.io/projected/4e36061f-0559-4cde-9b0a-b5cb328d0d76-kube-api-access-jkstk\") pod \"kube-proxy-qt9ld\" (UID: \"4e36061f-0559-4cde-9b0a-b5cb328d0d76\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.765408    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e36061f-0559-4cde-9b0a-b5cb328d0d76-kube-proxy\") pod \"kube-proxy-qt9ld\" (UID: \"4e36061f-0559-4cde-9b0a-b5cb328d0d76\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.765615    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e36061f-0559-4cde-9b0a-b5cb328d0d76-lib-modules\") pod \"kube-proxy-qt9ld\" (UID: \"4e36061f-0559-4cde-9b0a-b5cb328d0d76\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.766048    2380 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd5tc\" (UniqueName: \"kubernetes.io/projected/5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6-kube-api-access-pd5tc\") pod \"storage-provisioner\" (UID: \"5367404c-0e33-4f6c-9bb7-8fdb4ebbe4f6\") "
	Aug 13 21:12:49 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.770745    2380 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.987166    2380 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xhrj\" (UniqueName: \"kubernetes.io/projected/ffc12ff0-fe4e-422b-ae81-83f17416e379-kube-api-access-8xhrj\") pod \"ffc12ff0-fe4e-422b-ae81-83f17416e379\" (UID: \"ffc12ff0-fe4e-422b-ae81-83f17416e379\") "
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:49.987292    2380 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc12ff0-fe4e-422b-ae81-83f17416e379-config-volume\") pod \"ffc12ff0-fe4e-422b-ae81-83f17416e379\" (UID: \"ffc12ff0-fe4e-422b-ae81-83f17416e379\") "
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: W0813 21:12:49.997946    2380 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/ffc12ff0-fe4e-422b-ae81-83f17416e379/volumes/kubernetes.io~projected/kube-api-access-8xhrj: clearQuota called, but quotas disabled
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.000758    2380 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc12ff0-fe4e-422b-ae81-83f17416e379-kube-api-access-8xhrj" (OuterVolumeSpecName: "kube-api-access-8xhrj") pod "ffc12ff0-fe4e-422b-ae81-83f17416e379" (UID: "ffc12ff0-fe4e-422b-ae81-83f17416e379"). InnerVolumeSpecName "kube-api-access-8xhrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: W0813 21:12:50.004339    2380 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/ffc12ff0-fe4e-422b-ae81-83f17416e379/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.011180    2380 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc12ff0-fe4e-422b-ae81-83f17416e379-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffc12ff0-fe4e-422b-ae81-83f17416e379" (UID: "ffc12ff0-fe4e-422b-ae81-83f17416e379"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.088906    2380 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc12ff0-fe4e-422b-ae81-83f17416e379-config-volume\") on node \"newest-cni-20210813210910-30853\" DevicePath \"\""
	Aug 13 21:12:50 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:50.089020    2380 reconciler.go:319] "Volume detached for volume \"kube-api-access-8xhrj\" (UniqueName: \"kubernetes.io/projected/ffc12ff0-fe4e-422b-ae81-83f17416e379-kube-api-access-8xhrj\") on node \"newest-cni-20210813210910-30853\" DevicePath \"\""
	Aug 13 21:12:51 newest-cni-20210813210910-30853 kubelet[2380]: W0813 21:12:51.301805    2380 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice/crio-7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.scope": /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice/crio-7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.scope/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 21:12:52 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:52.899090    2380 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ffc12ff0-fe4e-422b-ae81-83f17416e379 path="/var/lib/kubelet/pods/ffc12ff0-fe4e-422b-ae81-83f17416e379/volumes"
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.160076    2380 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.160115    2380 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.165745    2380 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ns9c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-mrklk_kube-system(ad347f93-2bcc-4e1c-b82c-66f4854c46d2): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.165812    2380 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-mrklk" podUID=ad347f93-2bcc-4e1c-b82c-66f4854c46d2
	Aug 13 21:12:53 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:53.840167    2380 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-mrklk" podUID=ad347f93-2bcc-4e1c-b82c-66f4854c46d2
	Aug 13 21:12:54 newest-cni-20210813210910-30853 kubelet[2380]: E0813 21:12:54.903723    2380 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e36061f_0559_4cde_9b0a_b5cb328d0d76.slice/crio-7d4ecadfd7f192bb359463a9fc171f251bc71bd132e4cd005369f16335265ff4.scope\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:12:56 newest-cni-20210813210910-30853 kubelet[2380]: I0813 21:12:56.854241    2380 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 21:12:56 newest-cni-20210813210910-30853 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:12:56 newest-cni-20210813210910-30853 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:12:56 newest-cni-20210813210910-30853 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
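	(The ErrImagePull entries are expected: this suite points metrics-server at the unresolvable registry fake.domain, as the dial tcp lookup failure shows. A hypothetical manual repro from inside the guest, assuming the profile is still up:)
	  minikube -p newest-cni-20210813210910-30853 ssh "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"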
	
	* 
	* ==> storage-provisioner [21ee344d5f9ea8e4761b94b56fcdbf6c01d39e4f445b3a12bda8483d3204a1fe] <==
	* I0813 21:12:54.045115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:14:20.960727   14770 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (85.03s)
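
For local triage, the diagnostic that failed above can be replayed once the profile is running again; a sketch reusing the exact binary and kubeconfig paths from the stderr:

  minikube -p newest-cni-20210813210910-30853 ssh "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"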

                                                
                                    

Test pass (228/269)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 9.13
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.21.3/json-events 7.13
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 6.42
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestOffline 176.85
29 TestAddons/parallel/Registry 21.76
31 TestAddons/parallel/MetricsServer 6.12
32 TestAddons/parallel/HelmTiller 42.19
33 TestAddons/parallel/Olm 76.14
34 TestAddons/parallel/CSI 91.89
35 TestAddons/parallel/GCPAuth 77.44
36 TestCertOptions 68.31
38 TestForceSystemdFlag 61.37
39 TestForceSystemdEnv 63.52
40 TestKVMDriverInstallOrUpdate 2.25
44 TestErrorSpam/setup 54.31
45 TestErrorSpam/start 0.42
46 TestErrorSpam/status 0.77
47 TestErrorSpam/pause 4.93
48 TestErrorSpam/unpause 1.77
49 TestErrorSpam/stop 6.25
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 107.88
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 6.48
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.26
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.55
61 TestFunctional/serial/CacheCmd/cache/add_local 2.56
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.12
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
69 TestFunctional/serial/ExtraConfig 38.18
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.48
72 TestFunctional/serial/LogsFileCmd 1.41
74 TestFunctional/parallel/ConfigCmd 0.38
75 TestFunctional/parallel/DashboardCmd 7.46
76 TestFunctional/parallel/DryRun 0.32
77 TestFunctional/parallel/InternationalLanguage 0.16
78 TestFunctional/parallel/StatusCmd 0.77
81 TestFunctional/parallel/ServiceCmd 35.25
82 TestFunctional/parallel/AddonsCmd 0.17
83 TestFunctional/parallel/PersistentVolumeClaim 67.94
85 TestFunctional/parallel/SSHCmd 0.46
86 TestFunctional/parallel/CpCmd 0.51
87 TestFunctional/parallel/MySQL 34.13
88 TestFunctional/parallel/FileSync 0.22
89 TestFunctional/parallel/CertSync 1.35
93 TestFunctional/parallel/NodeLabels 0.08
94 TestFunctional/parallel/LoadImage 2.52
95 TestFunctional/parallel/RemoveImage 3.55
96 TestFunctional/parallel/LoadImageFromFile 2.58
97 TestFunctional/parallel/BuildImage 5.93
98 TestFunctional/parallel/ListImages 0.39
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
112 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
114 TestFunctional/parallel/ProfileCmd/profile_list 0.3
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
116 TestFunctional/parallel/MountCmd/any-port 13.39
117 TestFunctional/parallel/Version/short 0.05
118 TestFunctional/parallel/Version/components 0.69
119 TestFunctional/parallel/MountCmd/specific-port 1.8
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
148 TestMainNoArgs 0.05
151 TestMultiNode/serial/FreshStart2Nodes 135.29
154 TestMultiNode/serial/AddNode 55.06
155 TestMultiNode/serial/ProfileList 0.24
156 TestMultiNode/serial/CopyFile 1.81
157 TestMultiNode/serial/StopNode 2.93
158 TestMultiNode/serial/StartAfterStop 48.97
159 TestMultiNode/serial/RestartKeepsNodes 181.9
160 TestMultiNode/serial/DeleteNode 1.89
161 TestMultiNode/serial/StopMultiNode 4.4
162 TestMultiNode/serial/RestartMultiNode 152.12
163 TestMultiNode/serial/ValidateNameConflict 57.2
169 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
170 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.28
172 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
173 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.26
175 TestDebPackageInstall/install_amd64_debian:10/minikube 0
176 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.64
178 TestDebPackageInstall/install_amd64_debian:9/minikube 0
179 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.28
181 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
182 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 17
184 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
185 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 16.23
187 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
188 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 16.53
190 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 15.12
194 TestScheduledStopUnix 94.05
198 TestRunningBinaryUpgrade 228.21
203 TestPause/serial/Start 186.51
211 TestNetworkPlugins/group/false 0.76
215 TestPause/serial/SecondStartNoReconfiguration 6.63
218 TestPause/serial/Unpause 0.89
220 TestPause/serial/DeletePaused 1
221 TestPause/serial/VerifyDeletedResources 16.2
229 TestNetworkPlugins/group/auto/Start 132.29
230 TestNetworkPlugins/group/kindnet/Start 112.42
231 TestNetworkPlugins/group/auto/KubeletFlags 0.24
232 TestNetworkPlugins/group/auto/NetCatPod 12.57
233 TestNetworkPlugins/group/auto/DNS 0.24
234 TestNetworkPlugins/group/auto/Localhost 0.18
235 TestNetworkPlugins/group/auto/HairPin 0.2
236 TestNetworkPlugins/group/cilium/Start 151.83
237 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
238 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
239 TestNetworkPlugins/group/kindnet/NetCatPod 17.68
240 TestNetworkPlugins/group/kindnet/DNS 0.28
241 TestNetworkPlugins/group/kindnet/Localhost 0.22
242 TestNetworkPlugins/group/kindnet/HairPin 0.3
243 TestNetworkPlugins/group/calico/Start 127.6
244 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
245 TestNetworkPlugins/group/custom-weave/Start 118.99
246 TestNetworkPlugins/group/cilium/ControllerPod 5.03
247 TestNetworkPlugins/group/cilium/KubeletFlags 0.26
248 TestNetworkPlugins/group/cilium/NetCatPod 13.66
249 TestNetworkPlugins/group/cilium/DNS 0.4
250 TestNetworkPlugins/group/cilium/Localhost 0.32
251 TestNetworkPlugins/group/cilium/HairPin 0.24
252 TestNetworkPlugins/group/enable-default-cni/Start 113.59
253 TestNetworkPlugins/group/calico/ControllerPod 5.05
254 TestNetworkPlugins/group/calico/KubeletFlags 0.22
255 TestNetworkPlugins/group/calico/NetCatPod 13.76
256 TestNetworkPlugins/group/calico/DNS 0.41
257 TestNetworkPlugins/group/calico/Localhost 0.23
258 TestNetworkPlugins/group/calico/HairPin 0.27
259 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.25
260 TestNetworkPlugins/group/custom-weave/NetCatPod 17.64
261 TestNetworkPlugins/group/flannel/Start 124.05
262 TestNetworkPlugins/group/bridge/Start 117.09
263 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
264 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.72
265 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
266 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
267 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
269 TestStartStop/group/old-k8s-version/serial/FirstStart 140.93
270 TestNetworkPlugins/group/flannel/ControllerPod 7.27
271 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
272 TestNetworkPlugins/group/bridge/NetCatPod 12.6
273 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
274 TestNetworkPlugins/group/flannel/NetCatPod 11.64
275 TestNetworkPlugins/group/bridge/DNS 0.26
276 TestNetworkPlugins/group/bridge/Localhost 0.2
277 TestNetworkPlugins/group/bridge/HairPin 0.24
278 TestNetworkPlugins/group/flannel/DNS 0.28
279 TestNetworkPlugins/group/flannel/Localhost 0.24
280 TestNetworkPlugins/group/flannel/HairPin 0.28
282 TestStartStop/group/no-preload/serial/FirstStart 179.94
284 TestStartStop/group/embed-certs/serial/FirstStart 108.47
285 TestStartStop/group/old-k8s-version/serial/DeployApp 11.69
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
287 TestStartStop/group/old-k8s-version/serial/Stop 3.11
288 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
289 TestStartStop/group/old-k8s-version/serial/SecondStart 472.01
291 TestStartStop/group/default-k8s-different-port/serial/FirstStart 132.85
292 TestStartStop/group/embed-certs/serial/DeployApp 12.71
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
294 TestStartStop/group/embed-certs/serial/Stop 4.12
295 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
296 TestStartStop/group/embed-certs/serial/SecondStart 428.95
297 TestStartStop/group/no-preload/serial/DeployApp 11.74
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.5
299 TestStartStop/group/no-preload/serial/Stop 63.46
300 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.62
301 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.26
302 TestStartStop/group/default-k8s-different-port/serial/Stop 3.11
303 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.16
304 TestStartStop/group/default-k8s-different-port/serial/SecondStart 415.26
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
306 TestStartStop/group/no-preload/serial/SecondStart 447.09
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
316 TestStartStop/group/newest-cni/serial/FirstStart 87.54
317 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.03
318 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.12
319 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.29
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
323 TestStartStop/group/newest-cni/serial/Stop 63.41
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
329 TestStartStop/group/newest-cni/serial/SecondStart 73.29
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
TestDownloadOnly/v1.14.0/json-events (9.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200748-30853 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200748-30853 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.128261579s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (9.13s)

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200748-30853
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200748-30853: exit status 85 (63.31475ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:07:48
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:07:48.097607   30865 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:07:48.097682   30865 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:48.097711   30865 out.go:311] Setting ErrFile to fd 2...
	I0813 20:07:48.097715   30865 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:48.097809   30865 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:07:48.097911   30865 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:07:48.098125   30865 out.go:305] Setting JSON to true
	I0813 20:07:48.132169   30865 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":6630,"bootTime":1628878638,"procs":143,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:07:48.132298   30865 start.go:121] virtualization: kvm guest
	I0813 20:07:48.135584   30865 notify.go:169] Checking for updates...
	I0813 20:07:48.137604   30865 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:07:48.165424   30865 start.go:278] selected driver: kvm2
	I0813 20:07:48.165439   30865 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:07:48.166268   30865 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:07:48.166458   30865 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:07:48.177716   30865 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:07:48.177760   30865 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:07:48.178302   30865 start_flags.go:344] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I0813 20:07:48.178407   30865 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:07:48.178453   30865 cni.go:93] Creating CNI manager for ""
	I0813 20:07:48.178460   30865 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:07:48.178469   30865 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:07:48.178478   30865 start_flags.go:277] config:
	{Name:download-only-20210813200748-30853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200748-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:07:48.178664   30865 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:07:48.180544   30865 download.go:92] Downloading: https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:07:50.514875   30865 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:07:50.584784   30865 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:07:50.584820   30865 cache.go:56] Caching tarball of preloaded images
	I0813 20:07:50.584989   30865 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:07:50.586945   30865 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:07:50.843107   30865 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200748-30853"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
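
The Last Start log above shows minikube pinning an md5 checksum in the preload download URL. A hypothetical spot check of the cached tarball against that checksum, using the default per-user cache path rather than this CI run's per-job .minikube root:

  md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
  # expect 70b8731eaaa1b4de2d1cd60021fc1260, the value pinned in the download URL above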

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (7.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200748-30853 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200748-30853 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.134404715s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (7.13s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200748-30853
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200748-30853: exit status 85 (62.606236ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:07:57
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:07:57.288482   30901 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:07:57.288546   30901 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:57.288549   30901 out.go:311] Setting ErrFile to fd 2...
	I0813 20:07:57.288552   30901 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:57.288647   30901 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:07:57.288747   30901 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:07:57.288852   30901 out.go:305] Setting JSON to true
	I0813 20:07:57.323290   30901 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":6639,"bootTime":1628878638,"procs":143,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:07:57.323443   30901 start.go:121] virtualization: kvm guest
	I0813 20:07:57.326889   30901 notify.go:169] Checking for updates...
	I0813 20:07:57.329145   30901 config.go:177] Loaded profile config "download-only-20210813200748-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0813 20:07:57.329229   30901 start.go:659] api.Load failed for download-only-20210813200748-30853: filestore "download-only-20210813200748-30853": Docker machine "download-only-20210813200748-30853" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:07:57.329269   30901 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:07:57.329297   30901 start.go:659] api.Load failed for download-only-20210813200748-30853: filestore "download-only-20210813200748-30853": Docker machine "download-only-20210813200748-30853" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:07:57.357484   30901 start.go:278] selected driver: kvm2
	I0813 20:07:57.357496   30901 start.go:751] validating driver "kvm2" against &{Name:download-only-20210813200748-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200748-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:07:57.358176   30901 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:07:57.358358   30901 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:07:57.368973   30901 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:07:57.369639   30901 cni.go:93] Creating CNI manager for ""
	I0813 20:07:57.369652   30901 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:07:57.369660   30901 start_flags.go:277] config:
	{Name:download-only-20210813200748-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200748-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:07:57.369743   30901 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:07:57.371439   30901 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:07:57.440964   30901 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:07:57.440991   30901 cache.go:56] Caching tarball of preloaded images
	I0813 20:07:57.441148   30901 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:07:57.442937   30901 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:07:57.509624   30901 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200748-30853"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200748-30853 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200748-30853 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.422785445s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200748-30853
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200748-30853: exit status 85 (63.670436ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:04
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:04.487646   30937 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:04.487753   30937 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:04.487767   30937 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:04.487771   30937 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:04.487867   30937 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:04.487968   30937 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:04.488061   30937 out.go:305] Setting JSON to true
	I0813 20:08:04.521975   30937 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":6646,"bootTime":1628878638,"procs":143,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:04.522099   30937 start.go:121] virtualization: kvm guest
	I0813 20:08:04.524583   30937 notify.go:169] Checking for updates...
	I0813 20:08:04.526442   30937 config.go:177] Loaded profile config "download-only-20210813200748-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0813 20:08:04.526486   30937 start.go:659] api.Load failed for download-only-20210813200748-30853: filestore "download-only-20210813200748-30853": Docker machine "download-only-20210813200748-30853" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:04.526523   30937 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:04.526552   30937 start.go:659] api.Load failed for download-only-20210813200748-30853: filestore "download-only-20210813200748-30853": Docker machine "download-only-20210813200748-30853" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:04.554659   30937 start.go:278] selected driver: kvm2
	I0813 20:08:04.554675   30937 start.go:751] validating driver "kvm2" against &{Name:download-only-20210813200748-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200748-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:04.555502   30937 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:04.555652   30937 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:08:04.566099   30937 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:08:04.566751   30937 cni.go:93] Creating CNI manager for ""
	I0813 20:08:04.566762   30937 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 20:08:04.566769   30937 start_flags.go:277] config:
	{Name:download-only-20210813200748-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210813200748-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:04.566866   30937 iso.go:123] acquiring lock: {Name:mk6ca645530c829d996d6117a97e4f8a542f7163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:04.568361   30937 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:08:04.626798   30937 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:04.626822   30937 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:04.627012   30937 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:08:04.628853   30937 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:04.691487   30937 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:09.073653   30937 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:09.073784   30937 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200748-30853"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)
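
The download lines above show the preload contract: each tarball URL carries an md5 digest in its checksum= query parameter, and the client hashes what it saves and compares it against that digest (the "getting/saving/verifying checksum" lines). As a rough single-pass sketch in Go — assuming nothing about minikube's internals beyond the URL format; downloadWithMD5 is an invented helper name, not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and verifies the bytes against an
// expected hex-encoded MD5 digest, like the checksum= parameter on the
// preload URLs logged above. Illustrative sketch only.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the tarball is not re-read from disk to verify it.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", dest, got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest taken from the v1.22.0-rc.0 preload lines above.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"c7902b63f7bbc786f5f337da25a17477",
	)
	if err != nil {
		fmt.Println(err)
	}
}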

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210813200748-30853
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestOffline (176.85s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210813204600-30853 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210813204600-30853 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m55.676876739s)
helpers_test.go:176: Cleaning up "offline-crio-20210813204600-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210813204600-30853
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210813204600-30853: (1.175417256s)
--- PASS: TestOffline (176.85s)

                                                
                                    
TestAddons/parallel/Registry (21.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 22.288413ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-h6s98" [1829875c-4f3b-483e-8582-350974b1fece] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014185711s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-h8lsg" [acb087eb-33aa-47b9-8ccd-ecea64c4ae2a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010631016s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210813200811-30853 delete po -l run=registry-test --now

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210813200811-30853 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210813200811-30853 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.244124547s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 ip

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable registry --alsologtostderr -v=1
addons_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable registry --alsologtostderr -v=1: (1.082034371s)
--- PASS: TestAddons/parallel/Registry (21.76s)
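
Many of the steps above block on lines like "waiting 6m0s for pods matching <label> in namespace <ns>", i.e. the harness polls the API server until every pod carrying the label reports Running. A minimal client-go sketch of that poll-until-Running pattern — assuming a standard kubeconfig at the default location; waitForPods is a made-up helper, not the one in helpers_test.go:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running,
// or until timeout elapses. Illustrative sketch only.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != "Running" {
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	// Load ~/.kube/config; the tests use the minikube profile's context instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Label and namespace taken from the Registry wait above.
	if err := waitForPods(context.Background(), cs, "kube-system", "actual-registry=true", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods ready")
}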

                                                
                                    
TestAddons/parallel/MetricsServer (6.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 2.345845ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-gnqsc" [c3871703-1162-4d76-bf75-ce2c9fa75212] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01199496s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210813200811-30853 top pods -n kube-system
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable metrics-server --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable metrics-server --alsologtostderr -v=1: (1.001624249s)
--- PASS: TestAddons/parallel/MetricsServer (6.12s)

                                                
                                    
TestAddons/parallel/HelmTiller (42.19s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 3.686761ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-bmgs8" [b27137ea-ef0f-44d5-9fd1-42ec9aa91f1e] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014410396s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210813200811-30853 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210813200811-30853 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (14.96175882s)
addons_test.go:432: kubectl --context addons-20210813200811-30853 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210813200811-30853 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210813200811-30853 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (20.904727542s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:444: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable helm-tiller --alsologtostderr -v=1: (1.032554022s)
--- PASS: TestAddons/parallel/HelmTiller (42.19s)

                                                
                                    
TestAddons/parallel/Olm (76.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 19.596209ms
addons_test.go:467: olm-operator stabilized in 21.787832ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:471: packageserver stabilized in 25.940864ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "catalog-operator-75d496484d-sbs5r" [70f59bff-f3c5-42c3-9910-5716403e87f0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.014473967s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "olm-operator-859c88c96-mwtwb" [24e09dae-f72b-40a4-80cb-f5ecbbf0ca7f] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.010854369s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-67fc94bc46-j8jrl" [ee06352d-ffa1-463f-b2e6-232ca6dbe2dd] Running
helpers_test.go:343: "packageserver-67fc94bc46-r9m4b" [6672f414-2c72-45f1-be32-f615183e971b] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-67fc94bc46-j8jrl" [ee06352d-ffa1-463f-b2e6-232ca6dbe2dd] Running
helpers_test.go:343: "packageserver-67fc94bc46-r9m4b" [6672f414-2c72-45f1-be32-f615183e971b] Running
helpers_test.go:343: "packageserver-67fc94bc46-j8jrl" [ee06352d-ffa1-463f-b2e6-232ca6dbe2dd] Running
helpers_test.go:343: "packageserver-67fc94bc46-r9m4b" [6672f414-2c72-45f1-be32-f615183e971b] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-67fc94bc46-j8jrl" [ee06352d-ffa1-463f-b2e6-232ca6dbe2dd] Running
helpers_test.go:343: "packageserver-67fc94bc46-r9m4b" [6672f414-2c72-45f1-be32-f615183e971b] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-67fc94bc46-j8jrl" [ee06352d-ffa1-463f-b2e6-232ca6dbe2dd] Running
helpers_test.go:343: "packageserver-67fc94bc46-r9m4b" [6672f414-2c72-45f1-be32-f615183e971b] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-67fc94bc46-j8jrl" [ee06352d-ffa1-463f-b2e6-232ca6dbe2dd] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.016355916s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-cqbtm" [f61fde8c-0bdf-4e93-a96f-2181f2d62fc3] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.010175918s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:499: kubectl --context addons-20210813200811-30853 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
2021/08/13 20:12:14 [DEBUG] GET http://192.168.39.144:5000

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200811-30853 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200811-30853 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200811-30853 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200811-30853 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200811-30853 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (76.14s)

                                                
                                    
TestAddons/parallel/CSI (91.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 22.762113ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200811-30853 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200811-30853 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [3f923d7f-c597-488d-8bee-872ae8a6031f] Pending
helpers_test.go:343: "task-pv-pod" [3f923d7f-c597-488d-8bee-872ae8a6031f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [3f923d7f-c597-488d-8bee-872ae8a6031f] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.039645779s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813200811-30853 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813200811-30853 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210813200811-30853 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210813200811-30853 delete pod task-pv-pod: (17.912953246s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210813200811-30853 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200811-30853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200811-30853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [9b9b5425-2e89-4b5e-b4e7-fdc5036d9ff8] Pending
helpers_test.go:343: "task-pv-pod-restore" [9b9b5425-2e89-4b5e-b4e7-fdc5036d9ff8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [9b9b5425-2e89-4b5e-b4e7-fdc5036d9ff8] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 32.00921291s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210813200811-30853 delete pod task-pv-pod-restore

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:591: (dbg) Done: kubectl --context addons-20210813200811-30853 delete pod task-pv-pod-restore: (6.225801486s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210813200811-30853 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210813200811-30853 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable csi-hostpath-driver --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.429382622s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable volumesnapshots --alsologtostderr -v=1: (1.069580257s)
--- PASS: TestAddons/parallel/CSI (91.89s)

                                                
                                    
TestAddons/parallel/GCPAuth (77.44s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210813200811-30853 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b1b84144-d4f0-4c62-ac7b-9ed065e6f23c] Pending
helpers_test.go:343: "busybox" [b1b84144-d4f0-4c62-ac7b-9ed065e6f23c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [b1b84144-d4f0-4c62-ac7b-9ed065e6f23c] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 12.025809983s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210813200811-30853 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210813200811-30853 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210813200811-30853 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-fzcpx" [d9807aa4-fbf0-4882-8430-19ab67fd1b8c] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-fzcpx" [d9807aa4-fbf0-4882-8430-19ab67fd1b8c] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 33.014850324s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210813200811-30853 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-xjxw9" [d4598781-17d6-40dd-8760-b02c4e9c31f7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-xjxw9" [d4598781-17d6-40dd-8760-b02c4e9c31f7] Running
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 18.017274627s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable gcp-auth --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200811-30853 addons disable gcp-auth --alsologtostderr -v=1: (12.818347809s)
--- PASS: TestAddons/parallel/GCPAuth (77.44s)

                                                
                                    
TestCertOptions (68.31s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210813205051-30853 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210813205051-30853 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m6.938294202s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210813205051-30853 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813205051-30853 config view
helpers_test.go:176: Cleaning up "cert-options-20210813205051-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210813205051-30853
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210813205051-30853: (1.084741006s)
--- PASS: TestCertOptions (68.31s)
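
The openssl call above checks that the generated apiserver certificate actually contains the extra SANs passed via --apiserver-ips and --apiserver-names. The same inspection can be done with Go's standard library; a small sketch, assuming it runs inside the guest where the certificate path shown above exists:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// In-VM certificate path taken from the ssh/openssl check above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Expect localhost and www.google.com among the DNS SANs, and
	// 127.0.0.1 / 192.168.15.15 among the IP SANs, per the flags above.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}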

                                                
                                    
TestForceSystemdFlag (61.37s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210813204950-30853 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210813204950-30853 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.5296694s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813204950-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210813204950-30853
--- PASS: TestForceSystemdFlag (61.37s)

                                                
                                    
TestForceSystemdEnv (63.52s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210813204600-30853 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210813204600-30853 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.457878914s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210813204600-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210813204600-30853
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210813204600-30853: (1.059515038s)
--- PASS: TestForceSystemdEnv (63.52s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.25s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.25s)

                                                
                                    
TestErrorSpam/setup (54.31s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210813201711-30853 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201711-30853 --driver=kvm2  --container-runtime=crio
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210813201711-30853 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201711-30853 --driver=kvm2  --container-runtime=crio: (54.312062882s)
--- PASS: TestErrorSpam/setup (54.31s)

                                                
                                    
TestErrorSpam/start (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (4.93s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 pause: exit status 80 (2.360210417s)
-- stdout --
	* Pausing node nospam-20210813201711-30853 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc pause 33b953ba7f5715ccc6866599771b15552e03fe67610e5a5ebe5cac2a9c13d391 4324f5ce2adff6810e19bd181300cb46e03870b844aaff2c06838dffcea3f9a0: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:18:09Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 pause: (2.061736774s)
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 pause
--- PASS: TestErrorSpam/pause (4.93s)
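The first pause attempt above failed because minikube handed two container IDs to a single runc pause invocation, and runc accepts exactly one ID per call, as its usage text says. A minimal sketch of the per-container form runc expects (the loop and truncated IDs are illustrative, not minikube's actual fix):

    # runc pause takes exactly one container ID per invocation, so pause
    # several containers with one call each (IDs truncated for readability).
    for id in 33b953ba7f5715cc 4324f5ce2adff681; do
      sudo runc pause "$id"
    done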
TestErrorSpam/unpause (1.77s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 unpause
--- PASS: TestErrorSpam/unpause (1.77s)
TestErrorSpam/stop (6.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 stop: (6.100977346s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201711-30853 --log_dir /tmp/nospam-20210813201711-30853 stop
--- PASS: TestErrorSpam/stop (6.25s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/test/nested/copy/30853/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (107.88s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201821-30853 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201821-30853 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m47.879405539s)
--- PASS: TestFunctional/serial/StartWithProxy (107.88s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (6.48s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201821-30853 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201821-30853 --alsologtostderr -v=8: (6.482136464s)
functional_test.go:631: soft start took 6.482725896s for "functional-20210813201821-30853" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.48s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.26s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210813201821-30853 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)
TestFunctional/serial/CacheCmd/cache/add_remote (4.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add k8s.gcr.io/pause:3.3: (1.747165734s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add k8s.gcr.io/pause:latest: (1.923637167s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.55s)
TestFunctional/serial/CacheCmd/cache/add_local (2.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210813201821-30853 /tmp/functional-20210813201821-30853834139385
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add minikube-local-cache-test:functional-20210813201821-30853
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 cache add minikube-local-cache-test:functional-20210813201821-30853: (2.280386607s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cache delete minikube-local-cache-test:functional-20210813201821-30853
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210813201821-30853
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.56s)
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (229.667768ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 cache reload: (1.460661756s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
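The reload check above is a remove / verify-gone / reload / verify-present cycle. Condensed into a standalone sequence (commands as run by the test, profile name from this run):

    PROFILE=functional-20210813201821-30853
    out/minikube-linux-amd64 -p "$PROFILE" ssh sudo crictl rmi k8s.gcr.io/pause:latest
    out/minikube-linux-amd64 -p "$PROFILE" ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: image removed
    out/minikube-linux-amd64 -p "$PROFILE" cache reload                                       # pushes cached images back into the VM
    out/minikube-linux-amd64 -p "$PROFILE" ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again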
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 kubectl -- --context functional-20210813201821-30853 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210813201821-30853 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
TestFunctional/serial/ExtraConfig (38.18s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201821-30853 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201821-30853 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.183701945s)
functional_test.go:719: restart took 38.183799684s for "functional-20210813201821-30853" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.18s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210813201821-30853 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.48s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 logs: (1.478437381s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)
TestFunctional/serial/LogsFileCmd (1.41s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 logs --file /tmp/functional-20210813201821-30853119576068/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 logs --file /tmp/functional-20210813201821-30853119576068/logs.txt: (1.413937029s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)
TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 config get cpus: exit status 14 (63.437164ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 config get cpus: exit status 14 (62.686051ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
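The two non-zero exits above are expected: config get on an unset key exits with status 14, while set/get/unset otherwise succeed. The same cycle as a standalone sketch:

    PROFILE=functional-20210813201821-30853
    out/minikube-linux-amd64 -p "$PROFILE" config unset cpus
    out/minikube-linux-amd64 -p "$PROFILE" config get cpus    # exit 14: key not in config
    out/minikube-linux-amd64 -p "$PROFILE" config set cpus 2
    out/minikube-linux-amd64 -p "$PROFILE" config get cpus    # prints 2
    out/minikube-linux-amd64 -p "$PROFILE" config unset cpus
    out/minikube-linux-amd64 -p "$PROFILE" config get cpus    # exit 14 again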
TestFunctional/parallel/DashboardCmd (7.46s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813201821-30853 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813201821-30853 --alsologtostderr -v=1] ...
=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:507: unable to kill pid 3470: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.46s)
TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201821-30853 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813201821-30853 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.327452ms)
-- stdout --
	* [functional-20210813201821-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0813 20:21:42.683760    3355 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:21:42.683843    3355 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:42.683847    3355 out.go:311] Setting ErrFile to fd 2...
	I0813 20:21:42.683850    3355 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:42.683940    3355 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:21:42.684145    3355 out.go:305] Setting JSON to false
	I0813 20:21:42.717901    3355 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":7465,"bootTime":1628878638,"procs":182,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:21:42.717972    3355 start.go:121] virtualization: kvm guest
	I0813 20:21:42.720241    3355 out.go:177] * [functional-20210813201821-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:21:42.721722    3355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:21:42.723171    3355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:21:42.724676    3355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:21:42.725971    3355 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:21:42.726381    3355 config.go:177] Loaded profile config "functional-20210813201821-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:21:42.726770    3355 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:21:42.726818    3355 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:21:42.737417    3355 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41865
	I0813 20:21:42.737823    3355 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:21:42.738349    3355 main.go:130] libmachine: Using API Version  1
	I0813 20:21:42.738373    3355 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:21:42.738695    3355 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:21:42.738871    3355 main.go:130] libmachine: (functional-20210813201821-30853) Calling .DriverName
	I0813 20:21:42.739050    3355 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:21:42.739394    3355 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:21:42.739432    3355 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:21:42.750930    3355 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0813 20:21:42.751301    3355 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:21:42.751702    3355 main.go:130] libmachine: Using API Version  1
	I0813 20:21:42.751722    3355 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:21:42.752039    3355 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:21:42.752226    3355 main.go:130] libmachine: (functional-20210813201821-30853) Calling .DriverName
	I0813 20:21:42.782461    3355 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:21:42.782492    3355 start.go:278] selected driver: kvm2
	I0813 20:21:42.782505    3355 start.go:751] validating driver "kvm2" against &{Name:functional-20210813201821-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813201821-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.168 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:21:42.782677    3355 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:21:42.785238    3355 out.go:177] 
	W0813 20:21:42.785368    3355 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 20:21:42.786833    3355 out.go:177] 
** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201821-30853 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
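The non-zero exit above is the requested-memory validator firing before any VM work starts: 250MB is below the 1800MB usable minimum, so start exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) even under --dry-run. Reproduced in isolation:

    out/minikube-linux-amd64 start -p functional-20210813201821-30853 --dry-run \
      --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?    # 23: RSRC_INSUFFICIENT_REQ_MEMORY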
TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201821-30853 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813201821-30853 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.320945ms)
-- stdout --
	* [functional-20210813201821-30853] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0813 20:21:42.520016    3302 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:21:42.520221    3302 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:42.520232    3302 out.go:311] Setting ErrFile to fd 2...
	I0813 20:21:42.520236    3302 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:42.520431    3302 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:21:42.520681    3302 out.go:305] Setting JSON to false
	I0813 20:21:42.562276    3302 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":7464,"bootTime":1628878638,"procs":177,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:21:42.562390    3302 start.go:121] virtualization: kvm guest
	I0813 20:21:42.564274    3302 out.go:177] * [functional-20210813201821-30853] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0813 20:21:42.565624    3302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:21:42.566905    3302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:21:42.568255    3302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:21:42.569780    3302 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:21:42.570209    3302 config.go:177] Loaded profile config "functional-20210813201821-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:21:42.570599    3302 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:21:42.570663    3302 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:21:42.581406    3302 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0813 20:21:42.581820    3302 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:21:42.582395    3302 main.go:130] libmachine: Using API Version  1
	I0813 20:21:42.582421    3302 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:21:42.582776    3302 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:21:42.582984    3302 main.go:130] libmachine: (functional-20210813201821-30853) Calling .DriverName
	I0813 20:21:42.583186    3302 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:21:42.583632    3302 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:21:42.583673    3302 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:21:42.594520    3302 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0813 20:21:42.595022    3302 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:21:42.595547    3302 main.go:130] libmachine: Using API Version  1
	I0813 20:21:42.595574    3302 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:21:42.595905    3302 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:21:42.596081    3302 main.go:130] libmachine: (functional-20210813201821-30853) Calling .DriverName
	I0813 20:21:42.624996    3302 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0813 20:21:42.625032    3302 start.go:278] selected driver: kvm2
	I0813 20:21:42.625038    3302 start.go:751] validating driver "kvm2" against &{Name:functional-20210813201821-30853 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813201821-30853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.168 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:21:42.625152    3302 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:21:42.627412    3302 out.go:177] 
	W0813 20:21:42.627563    3302 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 20:21:42.629001    3302 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
TestFunctional/parallel/StatusCmd (0.77s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)
TestFunctional/parallel/ServiceCmd (35.25s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210813201821-30853 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210813201821-30853 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-lqbwr" [86336c0d-5857-4c31-8f57-949e5a836006] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-lqbwr" [86336c0d-5857-4c31-8f57-949e5a836006] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 33.020136935s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 service list
functional_test.go:1372: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 service list: (1.273640792s)
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.39.168:32407
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.39.168:32407
functional_test.go:1431: Attempting to fetch http://192.168.39.168:32407 ...
functional_test.go:1450: http://192.168.39.168:32407: success! body:
Hostname: hello-node-6cbfcd7cbc-lqbwr
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.168:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.168:32407
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (35.25s)
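The service test is end to end: create a deployment, expose it as a NodePort, resolve its URL through minikube, then fetch it. The same flow by hand (the resolved port, 32407 here, varies per run):

    PROFILE=functional-20210813201821-30853
    kubectl --context "$PROFILE" create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context "$PROFILE" expose deployment hello-node --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p "$PROFILE" service hello-node --url)
    curl -s "$URL"    # echoserver replies with its hostname and request details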
TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
TestFunctional/parallel/PersistentVolumeClaim (67.94s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [b982e1e5-7362-4aa5-ab67-686d8d9ab270] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009469978s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210813201821-30853 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210813201821-30853 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813201821-30853 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813201821-30853 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813201821-30853 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [4ac4cd50-0e68-4cc3-8bf1-c0d46946e4a6] Pending
helpers_test.go:343: "sp-pod" [4ac4cd50-0e68-4cc3-8bf1-c0d46946e4a6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [4ac4cd50-0e68-4cc3-8bf1-c0d46946e4a6] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 39.015583693s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210813201821-30853 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210813201821-30853 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210813201821-30853 delete -f testdata/storage-provisioner/pod.yaml: (12.51144641s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813201821-30853 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [cc2ab692-c07b-4a0e-a1b9-34a4b41987ee] Pending
helpers_test.go:343: "sp-pod" [cc2ab692-c07b-4a0e-a1b9-34a4b41987ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [cc2ab692-c07b-4a0e-a1b9-34a4b41987ee] Running
E0813 20:22:14.038931   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.019258961s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210813201821-30853 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (67.94s)
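Persistence is proven by writing a file through the claim from one pod, deleting that pod, and reading the file back from a fresh pod bound to the same PVC (the test also waits for sp-pod to reach Running before each exec). Condensed from the steps above:

    PROFILE=functional-20210813201821-30853
    kubectl --context "$PROFILE" apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context "$PROFILE" apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context "$PROFILE" exec sp-pod -- touch /tmp/mount/foo              # write through the claim
    kubectl --context "$PROFILE" delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context "$PROFILE" apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context "$PROFILE" exec sp-pod -- ls /tmp/mount                     # foo survived the pod restart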
TestFunctional/parallel/SSHCmd (0.46s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)
TestFunctional/parallel/CpCmd (0.51s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.51s)
TestFunctional/parallel/MySQL (34.13s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210813201821-30853 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-h7lnv" [159f4fd3-95e8-4ab0-aa44-a73404481cbf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-h7lnv" [159f4fd3-95e8-4ab0-aa44-a73404481cbf] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.032348301s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;": exit status 1 (578.254318ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;": exit status 1 (300.717586ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;": exit status 1 (430.690779ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201821-30853 exec mysql-9bbbc5bbb-h7lnv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.13s)
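The three failed exec attempts above are just mysqld still coming up: the first two queries are refused (ERROR 1045), then the server socket is briefly unavailable (ERROR 2002), and the test retries until the query succeeds. A minimal polling sketch of the same idea (pod name from this run; the interval is illustrative):

    PROFILE=functional-20210813201821-30853
    until kubectl --context "$PROFILE" exec mysql-9bbbc5bbb-h7lnv -- \
          mysql -ppassword -e "show databases;"; do
      sleep 5    # mysqld refuses connections until initialization finishes
    done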
TestFunctional/parallel/FileSync (0.22s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/30853/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /etc/test/nested/copy/30853/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/30853.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /etc/ssl/certs/30853.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/30853.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /usr/share/ca-certificates/30853.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/308532.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /etc/ssl/certs/308532.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/308532.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /usr/share/ca-certificates/308532.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210813201821-30853 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/LoadImage (2.52s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
functional_test.go:239: (dbg) Done: docker pull busybox:1.33: (1.290466334s)
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813201821-30853
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 image load docker.io/library/busybox:load-functional-20210813201821-30853
2021/08/13 20:21:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201821-30853 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813201821-30853
--- PASS: TestFunctional/parallel/LoadImage (2.52s)

TestFunctional/parallel/RemoveImage (3.55s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Done: docker pull busybox:1.32: (1.317758629s)
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210813201821-30853
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 image load docker.io/library/busybox:remove-functional-20210813201821-30853
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 image load docker.io/library/busybox:remove-functional-20210813201821-30853: (1.375361836s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 image rm docker.io/library/busybox:remove-functional-20210813201821-30853
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201821-30853 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.55s)

TestFunctional/parallel/LoadImageFromFile (2.58s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Done: docker pull busybox:1.31: (1.270082145s)
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813201821-30853
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813201821-30853
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 image load /home/jenkins/workspace/KVM_Linux_crio_integration/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201821-30853 -- sudo crictl images
E0813 20:21:53.557950   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:21:53.563614   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:21:53.573825   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:21:53.594091   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:21:53.634350   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:21:53.714628   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:21:53.875060   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.58s)

TestFunctional/parallel/BuildImage (5.93s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 image build -t localhost/my-image:functional-20210813201821-30853 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201821-30853 image build -t localhost/my-image:functional-20210813201821-30853 testdata/build: (5.562732601s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813201821-30853 image build -t localhost/my-image:functional-20210813201821-30853 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> ff6fea52baf
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210813201821-30853
--> 48727ce1fa8
48727ce1fa876931b266a3e74349c7392e219596974996082f3fd63a3d3e0659
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210813201821-30853 image build -t localhost/my-image:functional-20210813201821-30853 testdata/build:
Completed short name "busybox" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2
Copying config sha256:69593048aa3acfee0f75f20b77acb549de2472063053f6730c4091b53f2dfb02
Writing manifest to image destination
Storing signatures
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201821-30853 -- sudo crictl inspecti localhost/my-image:functional-20210813201821-30853
--- PASS: TestFunctional/parallel/BuildImage (5.93s)

TestFunctional/parallel/ListImages (0.39s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 image ls
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813201821-30853 image ls:
localhost/my-image:functional-20210813201821-30853
localhost/minikube-local-cache-test:functional-20210813201821-30853
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/busybox:load-functional-20210813201821-30853
docker.io/library/busybox:latest
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.39s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo systemctl is-active docker": exit status 1 (212.351293ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo systemctl is-active containerd"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo systemctl is-active containerd": exit status 1 (210.415233ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
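For context on the non-zero exits above: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, and that status propagates back through ssh, so exit status 3 with "inactive" on stdout is the expected evidence that the docker and containerd runtimes are disabled under CRI-O. A minimal Go sketch of checking that pattern (the isActive helper and the direct, ssh-less invocation are illustrative assumptions, not the test's code):

	// Treat a clean non-zero exit from "systemctl is-active" as "unit not
	// active" instead of an error. Hypothetical sketch for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func isActive(unit string) (bool, string, error) {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err == nil {
			return true, state, nil // exit 0 => "active"
		}
		if _, ok := err.(*exec.ExitError); ok {
			return false, state, nil // e.g. exit 3 => "inactive"
		}
		return false, state, err // systemctl itself failed to run
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			active, state, err := isActive(unit)
			if err != nil {
				fmt.Println("error:", err)
				continue
			}
			fmt.Printf("%s: active=%v state=%q\n", unit, active, state)
		}
	}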

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210813201821-30853 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210813201821-30853 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.104.137.54 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210813201821-30853 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "243.613424ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1259: Took "54.749682ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "263.137751ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "64.003642ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/MountCmd/any-port (13.39s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813201821-30853 /tmp/mounttest828561299:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628886101560851633" to /tmp/mounttest828561299/created-by-test
functional_test_mount_test.go:110: wrote "test-1628886101560851633" to /tmp/mounttest828561299/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628886101560851633" to /tmp/mounttest828561299/test-1628886101560851633
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.626457ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 20:21 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 20:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 20:21 test-1628886101560851633
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh cat /mount-9p/test-1628886101560851633
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210813201821-30853 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [14c4f589-6159-4e35-8791-10d01e3459b3] Pending
helpers_test.go:343: "busybox-mount" [14c4f589-6159-4e35-8791-10d01e3459b3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [14c4f589-6159-4e35-8791-10d01e3459b3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.020159092s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210813201821-30853 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh stat /mount-9p/created-by-test
E0813 20:21:54.196181   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201821-30853 /tmp/mounttest828561299:/mount-9p --alsologtostderr -v=1] ...
E0813 20:21:54.836965   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.39s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813201821-30853 /tmp/mounttest279073144:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.954765ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh -- ls -la /mount-9p
E0813 20:21:56.117688   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201821-30853 /tmp/mounttest279073144:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh "sudo umount -f /mount-9p": exit status 1 (207.956539ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210813201821-30853 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201821-30853 /tmp/mounttest279073144:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E0813 20:21:58.677922   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:22:03.798275   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210813201821-30853
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210813201821-30853
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210813201821-30853
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210813201821-30853
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210813202418-30853 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210813202418-30853 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.849885ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813202418-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"f312cbb0-28a8-49f3-a326-75a5bf814b14","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig"},"datacontenttype":"application/json","id":"9b199a11-f2b1-4081-9a81-7b5a6e776a75","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"63a2ba29-32b4-4d7e-981d-5918fea52a40","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube"},"datacontenttype":"application/json","id":"9064ad45-7bb7-49e6-b225-99ed86ea9516","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"a5b7f6f3-8ac1-4269-a955-1c2c9e70b40e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"210d1f63-2d62-44c0-887e-b5ab35bbcd2d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813202418-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210813202418-30853
--- PASS: TestErrorJSONOutput (0.32s)
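The --output=json lines above are CloudEvents-style envelopes: every event carries specversion, id, source, type, and datacontenttype, with the step, info, or error payload under data. A minimal Go sketch of parsing a trimmed copy of the error event from this run (the cloudEvent struct is an illustrative stand-in, not a minikube type):

	// Decode one minikube JSON-output event; the data values in these
	// events are all strings, so map[string]string fits the payload.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"},"datacontenttype":"application/json","id":"210d1f63-2d62-44c0-887e-b5ab35bbcd2d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
	}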

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (135.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202419-30853 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0813 20:24:37.403329   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:26:07.714151   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:07.719409   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:07.729666   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:07.749926   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:07.790159   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:07.870491   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:08.031135   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:08.351629   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:08.992595   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:10.273263   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:12.834198   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:17.954699   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:26:28.195009   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202419-30853 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m14.859662656s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.29s)

TestMultiNode/serial/AddNode (55.06s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202419-30853 -v 3 --alsologtostderr
E0813 20:31:07.714629   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:31:35.398605   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210813202419-30853 -v 3 --alsologtostderr: (54.480518927s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.06s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (1.81s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 cp testdata/cp-test.txt multinode-20210813202419-30853-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 ssh -n multinode-20210813202419-30853-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 cp testdata/cp-test.txt multinode-20210813202419-30853-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 ssh -n multinode-20210813202419-30853-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (1.81s)

TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202419-30853 node stop m03: (2.089918077s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202419-30853 status: exit status 7 (414.784276ms)
-- stdout --
	multinode-20210813202419-30853
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202419-30853-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202419-30853-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr: exit status 7 (422.355809ms)
-- stdout --
	multinode-20210813202419-30853
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202419-30853-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202419-30853-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 20:31:47.821041    6650 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:31:47.821236    6650 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:31:47.821245    6650 out.go:311] Setting ErrFile to fd 2...
	I0813 20:31:47.821249    6650 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:31:47.821333    6650 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:31:47.821492    6650 out.go:305] Setting JSON to false
	I0813 20:31:47.821510    6650 mustload.go:65] Loading cluster: multinode-20210813202419-30853
	I0813 20:31:47.821771    6650 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:31:47.821787    6650 status.go:253] checking status of multinode-20210813202419-30853 ...
	I0813 20:31:47.822128    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:47.822169    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:47.832955    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0813 20:31:47.833364    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:47.833865    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:47.833887    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:47.834216    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:47.834404    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:31:47.837312    6650 status.go:328] multinode-20210813202419-30853 host status = "Running" (err=<nil>)
	I0813 20:31:47.837338    6650 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:31:47.837740    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:47.837781    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:47.850007    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0813 20:31:47.850418    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:47.850936    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:47.850961    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:47.851296    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:47.851477    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetIP
	I0813 20:31:47.856582    6650 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:31:47.856980    6650 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:31:47.857009    6650 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:31:47.857102    6650 host.go:66] Checking if "multinode-20210813202419-30853" exists ...
	I0813 20:31:47.857441    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:47.857481    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:47.868455    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0813 20:31:47.868851    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:47.869226    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:47.869249    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:47.869613    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:47.869818    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .DriverName
	I0813 20:31:47.870019    6650 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:31:47.870063    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHHostname
	I0813 20:31:47.875492    6650 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:31:47.875925    6650 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:ef:64", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:24:33 +0000 UTC Type:0 Mac:52:54:00:16:ef:64 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-20210813202419-30853 Clientid:01:52:54:00:16:ef:64}
	I0813 20:31:47.875956    6650 main.go:130] libmachine: (multinode-20210813202419-30853) DBG | domain multinode-20210813202419-30853 has defined IP address 192.168.39.64 and MAC address 52:54:00:16:ef:64 in network mk-multinode-20210813202419-30853
	I0813 20:31:47.876051    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHPort
	I0813 20:31:47.876220    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHKeyPath
	I0813 20:31:47.876385    6650 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetSSHUsername
	I0813 20:31:47.876516    6650 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853/id_rsa Username:docker}
	I0813 20:31:47.974936    6650 ssh_runner.go:149] Run: systemctl --version
	I0813 20:31:47.981069    6650 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:31:47.994489    6650 kubeconfig.go:93] found "multinode-20210813202419-30853" server: "https://192.168.39.64:8443"
	I0813 20:31:47.994512    6650 api_server.go:164] Checking apiserver status ...
	I0813 20:31:47.994542    6650 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:31:48.005065    6650 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2597/cgroup
	I0813 20:31:48.011130    6650 api_server.go:180] apiserver freezer: "10:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914dc216865e390473fe61a3bb624cd9.slice/crio-bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23.scope"
	I0813 20:31:48.011191    6650 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914dc216865e390473fe61a3bb624cd9.slice/crio-bbb34d91753401fe6a8a7e39cebe8a115a287f79dc3be0621bcb01acb8803c23.scope/freezer.state
	I0813 20:31:48.017742    6650 api_server.go:202] freezer state: "THAWED"
	I0813 20:31:48.017763    6650 api_server.go:239] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0813 20:31:48.023495    6650 api_server.go:265] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0813 20:31:48.023516    6650 status.go:419] multinode-20210813202419-30853 apiserver status = Running (err=<nil>)
	I0813 20:31:48.023525    6650 status.go:255] multinode-20210813202419-30853 status: &{Name:multinode-20210813202419-30853 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:31:48.023546    6650 status.go:253] checking status of multinode-20210813202419-30853-m02 ...
	I0813 20:31:48.023851    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:48.023889    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:48.034679    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0813 20:31:48.035099    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:48.035518    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:48.035538    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:48.035861    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:48.036091    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetState
	I0813 20:31:48.039261    6650 status.go:328] multinode-20210813202419-30853-m02 host status = "Running" (err=<nil>)
	I0813 20:31:48.039276    6650 host.go:66] Checking if "multinode-20210813202419-30853-m02" exists ...
	I0813 20:31:48.039620    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:48.039657    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:48.049778    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0813 20:31:48.050180    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:48.050627    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:48.050653    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:48.051010    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:48.051171    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetIP
	I0813 20:31:48.056216    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:31:48.056602    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:31:48.056625    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:31:48.056712    6650 host.go:66] Checking if "multinode-20210813202419-30853-m02" exists ...
	I0813 20:31:48.057044    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:48.057086    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:48.067202    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0813 20:31:48.067535    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:48.067973    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:48.067996    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:48.068340    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:48.068496    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .DriverName
	I0813 20:31:48.068675    6650 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:31:48.068695    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHHostname
	I0813 20:31:48.073509    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:31:48.073908    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:96:4b", ip: ""} in network mk-multinode-20210813202419-30853: {Iface:virbr1 ExpiryTime:2021-08-13 21:25:54 +0000 UTC Type:0 Mac:52:54:00:81:96:4b Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-20210813202419-30853-m02 Clientid:01:52:54:00:81:96:4b}
	I0813 20:31:48.073947    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) DBG | domain multinode-20210813202419-30853-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:81:96:4b in network mk-multinode-20210813202419-30853
	I0813 20:31:48.074052    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHPort
	I0813 20:31:48.074198    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHKeyPath
	I0813 20:31:48.074339    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetSSHUsername
	I0813 20:31:48.074425    6650 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202419-30853-m02/id_rsa Username:docker}
	I0813 20:31:48.162271    6650 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:31:48.173026    6650 status.go:255] multinode-20210813202419-30853-m02 status: &{Name:multinode-20210813202419-30853-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:31:48.173060    6650 status.go:253] checking status of multinode-20210813202419-30853-m03 ...
	I0813 20:31:48.173365    6650 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:31:48.173410    6650 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:31:48.185141    6650 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45981
	I0813 20:31:48.185530    6650 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:31:48.185994    6650 main.go:130] libmachine: Using API Version  1
	I0813 20:31:48.186029    6650 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:31:48.186342    6650 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:31:48.186491    6650 main.go:130] libmachine: (multinode-20210813202419-30853-m03) Calling .GetState
	I0813 20:31:48.189300    6650 status.go:328] multinode-20210813202419-30853-m03 host status = "Stopped" (err=<nil>)
	I0813 20:31:48.189315    6650 status.go:341] host is not running, skipping remaining checks
	I0813 20:31:48.189319    6650 status.go:255] multinode-20210813202419-30853-m03 status: &{Name:multinode-20210813202419-30853-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)
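
As the stdout above shows, `status` keeps reporting per-node state after a node stop but exits non-zero; in this run the exit code is 7 once m03's host and kubelet are Stopped. A sketch of the same check (exit-code value as observed here, not asserted as a documented contract):

    out/minikube-linux-amd64 -p multinode-20210813202419-30853 node stop m03
    out/minikube-linux-amd64 -p multinode-20210813202419-30853 status
    echo "exit: $?"   # 7 in this run: at least one host is Stopped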

                                                
                                    
TestMultiNode/serial/StartAfterStop (48.97s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 node start m03 --alsologtostderr
E0813 20:31:53.558024   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202419-30853 node start m03 --alsologtostderr: (48.363658021s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (48.97s)
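
Bringing the stopped worker back is a single `node start`; the test then re-checks `status` and `kubectl get nodes`. By hand, with this run's names:

    out/minikube-linux-amd64 -p multinode-20210813202419-30853 node start m03 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-20210813202419-30853 status
    kubectl get nodes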

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (181.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202419-30853
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210813202419-30853
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210813202419-30853: (7.155318523s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202419-30853 --wait=true -v=8 --alsologtostderr
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202419-30853 --wait=true -v=8 --alsologtostderr: (2m54.636737411s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202419-30853
--- PASS: TestMultiNode/serial/RestartKeepsNodes (181.90s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.89s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202419-30853 node delete m03: (1.344088538s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.89s)
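
The readiness check after `node delete` is a plain go-template over each node's Ready condition; the template below is the one used by this run and can be reused as-is:

    out/minikube-linux-amd64 -p multinode-20210813202419-30853 node delete m03
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"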

                                                
                                    
TestMultiNode/serial/StopMultiNode (4.4s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202419-30853 stop: (4.238437599s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202419-30853 status: exit status 7 (80.128046ms)

                                                
                                                
-- stdout --
	multinode-20210813202419-30853
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202419-30853-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr: exit status 7 (80.041489ms)

                                                
                                                
-- stdout --
	multinode-20210813202419-30853
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202419-30853-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:35:45.327540    7761 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:35:45.327635    7761 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:35:45.327645    7761 out.go:311] Setting ErrFile to fd 2...
	I0813 20:35:45.327648    7761 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:35:45.327751    7761 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:35:45.327899    7761 out.go:305] Setting JSON to false
	I0813 20:35:45.327915    7761 mustload.go:65] Loading cluster: multinode-20210813202419-30853
	I0813 20:35:45.328219    7761 config.go:177] Loaded profile config "multinode-20210813202419-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:35:45.328233    7761 status.go:253] checking status of multinode-20210813202419-30853 ...
	I0813 20:35:45.328563    7761 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:35:45.328602    7761 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:35:45.339112    7761 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0813 20:35:45.339557    7761 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:35:45.340147    7761 main.go:130] libmachine: Using API Version  1
	I0813 20:35:45.340169    7761 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:35:45.340522    7761 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:35:45.340668    7761 main.go:130] libmachine: (multinode-20210813202419-30853) Calling .GetState
	I0813 20:35:45.343426    7761 status.go:328] multinode-20210813202419-30853 host status = "Stopped" (err=<nil>)
	I0813 20:35:45.343442    7761 status.go:341] host is not running, skipping remaining checks
	I0813 20:35:45.343447    7761 status.go:255] multinode-20210813202419-30853 status: &{Name:multinode-20210813202419-30853 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:35:45.343465    7761 status.go:253] checking status of multinode-20210813202419-30853-m02 ...
	I0813 20:35:45.343744    7761 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 20:35:45.343773    7761 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:35:45.353774    7761 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0813 20:35:45.354132    7761 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:35:45.354528    7761 main.go:130] libmachine: Using API Version  1
	I0813 20:35:45.354546    7761 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:35:45.354849    7761 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:35:45.355012    7761 main.go:130] libmachine: (multinode-20210813202419-30853-m02) Calling .GetState
	I0813 20:35:45.357474    7761 status.go:328] multinode-20210813202419-30853-m02 host status = "Stopped" (err=<nil>)
	I0813 20:35:45.357488    7761 status.go:341] host is not running, skipping remaining checks
	I0813 20:35:45.357496    7761 status.go:255] multinode-20210813202419-30853-m02 status: &{Name:multinode-20210813202419-30853-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (4.40s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (152.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202419-30853 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0813 20:36:07.714222   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:36:53.557841   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 20:38:16.607097   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202419-30853 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m31.408451285s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202419-30853 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (152.12s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (57.2s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202419-30853
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202419-30853-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210813202419-30853-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (103.293547ms)

                                                
                                                
-- stdout --
	* [multinode-20210813202419-30853-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813202419-30853-m02' is duplicated with machine name 'multinode-20210813202419-30853-m02' in profile 'multinode-20210813202419-30853'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202419-30853-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202419-30853-m03 --driver=kvm2  --container-runtime=crio: (55.877169112s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202419-30853
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210813202419-30853: exit status 80 (233.260964ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210813202419-30853
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813202419-30853-m03 already exists in multinode-20210813202419-30853-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210813202419-30853-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (57.20s)
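
Both failures above are validation paths, not crashes: a `start -p` whose profile name collides with a machine name inside an existing profile exits 14 (MK_USAGE), and `node add` exits 80 (GUEST_NODE_ADD) because m03 was just created as a standalone profile (which is also why the error message names m03 twice). The two collisions, by hand:

    out/minikube-linux-amd64 start -p multinode-20210813202419-30853-m02 --driver=kvm2 --container-runtime=crio   # exit 14
    out/minikube-linux-amd64 node add -p multinode-20210813202419-30853   # exit 80 while the standalone m03 profile exists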

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.28s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.279399796s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.28s)
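
Every kvm2-driver entry in this group repeats the same smoke test against a different base image: mount the build output into a throwaway container, install libvirt0 first so the deb's dependency can be satisfied, then `dpkg -i` the built package. The pattern, with the image as the only variable:

    IMG=debian:sid   # also run with debian:latest, debian:10, debian:9, ubuntu:latest, ubuntu:20.10, ubuntu:20.04, ubuntu:18.04
    docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp "$IMG" \
      sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"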

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.26s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.264173949s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.26s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.64s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.640826339s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.64s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.28s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.280661209s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.28s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (17s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.996457335s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (17.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.23s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.227239082s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.23s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.53s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.531707123s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.53s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.12s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.116861634s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.12s)

                                                
                                    
TestScheduledStopUnix (94.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210813204426-30853 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210813204426-30853 --memory=2048 --driver=kvm2  --container-runtime=crio: (54.960279783s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813204426-30853 -n scheduled-stop-20210813204426-30853
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813204426-30853 -n scheduled-stop-20210813204426-30853
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813204426-30853
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813204426-30853
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210813204426-30853: exit status 7 (67.67863ms)

                                                
                                                
-- stdout --
	scheduled-stop-20210813204426-30853
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813204426-30853 -n scheduled-stop-20210813204426-30853
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813204426-30853 -n scheduled-stop-20210813204426-30853: exit status 7 (63.998495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210813204426-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210813204426-30853
--- PASS: TestScheduledStopUnix (94.05s)
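
The whole scheduled-stop flow is driven by `stop` flags: `--schedule <duration>` arms a deferred stop, issuing it again replaces the pending one, and `--cancel-scheduled` clears it; `status --format={{.TimeToStop}}` exposes the pending timer. Condensed from the steps above:

    out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813204426-30853
    out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-20210813204426-30853 --schedule 5s   # fires; status then exits 7 with everything Stopped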

                                                
                                    
TestRunningBinaryUpgrade (228.21s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.328168347.exe start -p running-upgrade-20210813204707-30853 --memory=2200 --vm-driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.328168347.exe start -p running-upgrade-20210813204707-30853 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m27.089178957s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210813204707-30853 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210813204707-30853 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.816741196s)
helpers_test.go:176: Cleaning up "running-upgrade-20210813204707-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210813204707-30853
--- PASS: TestRunningBinaryUpgrade (228.21s)
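
The upgrade path under test: bring a cluster up with an old released binary (v1.6.2 here, fetched to a temp file), then run `start` on the same profile with the binary under test and let it upgrade the running cluster in place. Condensed (note the old binary still uses `--vm-driver`):

    /tmp/minikube-v1.6.2.328168347.exe start -p running-upgrade-20210813204707-30853 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-20210813204707-30853 --memory=2200 --driver=kvm2 --container-runtime=crio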

                                                
                                    
TestPause/serial/Start (186.51s)

=== RUN   TestPause/serial/Start

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813204600-30853 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813204600-30853 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (3m6.510385703s)
--- PASS: TestPause/serial/Start (186.51s)

                                                
                                    
TestNetworkPlugins/group/false (0.76s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210813204703-30853 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210813204703-30853 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (455.075143ms)

                                                
                                                
-- stdout --
	* [false-20210813204703-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:47:04.055273    2298 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:47:04.055342    2298 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:47:04.055347    2298 out.go:311] Setting ErrFile to fd 2...
	I0813 20:47:04.055352    2298 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:47:04.055461    2298 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:47:04.055732    2298 out.go:305] Setting JSON to false
	I0813 20:47:04.093177    2298 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":8986,"bootTime":1628878638,"procs":172,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:47:04.093316    2298 start.go:121] virtualization: kvm guest
	I0813 20:47:04.095648    2298 out.go:177] * [false-20210813204703-30853] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:47:04.097028    2298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:47:04.095819    2298 notify.go:169] Checking for updates...
	I0813 20:47:04.098459    2298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:47:04.099925    2298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:47:04.101519    2298 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:47:04.102059    2298 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:47:04.102199    2298 config.go:177] Loaded profile config "offline-crio-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:47:04.102315    2298 config.go:177] Loaded profile config "pause-20210813204600-30853": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:47:04.102410    2298 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:47:04.445317    2298 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:47:04.445363    2298 start.go:278] selected driver: kvm2
	I0813 20:47:04.445371    2298 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:47:04.445397    2298 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:47:04.447932    2298 out.go:177] 
	W0813 20:47:04.448073    2298 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0813 20:47:04.449337    2298 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210813204703-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210813204703-30853
--- PASS: TestNetworkPlugins/group/false (0.76s)
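
This entry passes because the failure is the expected one: with `--container-runtime=crio`, `--cni=false` is rejected during configuration validation, before any VM is created, with exit status 14 and the message `X Exiting due to MK_USAGE: The "crio" container runtime requires CNI`. To reproduce the rejection:

    out/minikube-linux-amd64 start -p false-20210813204703-30853 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    echo "exit: $?"   # 14 (MK_USAGE)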

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813204600-30853 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813204600-30853 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6.608173749s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.63s)

                                                
                                    
TestPause/serial/Unpause (0.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210813204600-30853 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (1s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210813204600-30853 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210813204600-30853 --alsologtostderr -v=5: (1.001224156s)
--- PASS: TestPause/serial/DeletePaused (1.00s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (16.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.201109312s)
--- PASS: TestPause/serial/VerifyDeletedResources (16.20s)
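
The serial pause group drives one profile end to end: start, a second no-reconfiguration start, unpause, delete, then `profile list --output json` to confirm the profile and its resources are gone. The tail of that sequence, by hand:

    out/minikube-linux-amd64 unpause -p pause-20210813204600-30853 --alsologtostderr -v=5
    out/minikube-linux-amd64 delete -p pause-20210813204600-30853 --alsologtostderr -v=5
    out/minikube-linux-amd64 profile list --output json   # deleted profile must be absent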

                                                
                                    
TestNetworkPlugins/group/auto/Start (132.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=crio
E0813 20:51:07.714202   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=crio: (2m12.286477261s)
--- PASS: TestNetworkPlugins/group/auto/Start (132.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (112.42s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m52.415058864s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (112.42s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210813204703-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.57s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813204703-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-z5tdt" [d4af1e79-3044-4c24-9bf9-88e751d5fa09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-z5tdt" [d4af1e79-3044-4c24-9bf9-88e751d5fa09] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.009852638s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.57s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813204703-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/cilium/Start (151.83s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210813204704-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210813204704-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=crio: (2m31.827504635s)
--- PASS: TestNetworkPlugins/group/cilium/Start (151.83s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-gx8wq" [340bd4d5-64f4-46a8-b13e-9aa98fde0859] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.026238131s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210813204703-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.68s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813204703-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-cz62x" [572a1b94-9b8c-458d-9833-e1ad070ae8af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-cz62x" [572a1b94-9b8c-458d-9833-e1ad070ae8af] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.014313233s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.68s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813204703-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.30s)

TestNetworkPlugins/group/calico/Start (127.6s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210813204704-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210813204704-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=crio: (2m7.599966084s)
--- PASS: TestNetworkPlugins/group/calico/Start (127.60s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210813204857-30853
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

TestNetworkPlugins/group/custom-weave/Start (118.99s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210813204704-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=crio
E0813 20:54:56.608178   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210813204704-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=crio: (1m58.985304806s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (118.99s)

TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-9sbb4" [dcf49e37-6607-4f13-8cf0-15a4e6c18e85] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.022871254s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210813204704-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.26s)

TestNetworkPlugins/group/cilium/NetCatPod (13.66s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210813204704-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-x24f9" [39b8663c-d389-407a-9550-20dbccd2502e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-x24f9" [39b8663c-d389-407a-9550-20dbccd2502e] Running
E0813 20:56:07.714247   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.019754724s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.66s)

TestNetworkPlugins/group/cilium/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210813204704-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.40s)

TestNetworkPlugins/group/cilium/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210813204704-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.32s)

TestNetworkPlugins/group/cilium/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210813204704-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (113.59s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m53.592010186s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (113.59s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-zm6xl" [ca5e81d7-e95f-4461-81e4-0a11bb31a3fb] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.051584082s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210813204704-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (13.76s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210813204704-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-5r6vl" [3c07debd-ac7d-40a9-8b96-1bea7aa33d1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-5r6vl" [3c07debd-ac7d-40a9-8b96-1bea7aa33d1b] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.0293811s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.76s)

TestNetworkPlugins/group/calico/DNS (0.41s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204704-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.41s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210813204704-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210813204704-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210813204704-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-weave/NetCatPod (17.64s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813204704-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context custom-weave-20210813204704-30853 replace --force -f testdata/netcat-deployment.yaml: (6.522515199s)
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-fkwfq" [d3a7c15f-368b-4287-ba2d-99062d3d969c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-fkwfq" [d3a7c15f-368b-4287-ba2d-99062d3d969c] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 11.010600017s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (17.64s)

TestNetworkPlugins/group/flannel/Start (124.05s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=crio
E0813 20:56:53.558116   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m4.047286298s)
--- PASS: TestNetworkPlugins/group/flannel/Start (124.05s)

TestNetworkPlugins/group/bridge/Start (117.09s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=crio
E0813 20:58:08.233062   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:08.238366   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:08.248596   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:08.268904   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:08.309276   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:08.390088   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:08.550310   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210813204703-30853 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m57.093193616s)
--- PASS: TestNetworkPlugins/group/bridge/Start (117.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210813204703-30853 "pgrep -a kubelet"
E0813 20:58:08.870490   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813204703-30853 replace --force -f testdata/netcat-deployment.yaml
E0813 20:58:09.511581   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-jdclc" [4c948452-2c87-4532-badf-1afbb8a80f58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 20:58:10.792277   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:13.353173   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-jdclc" [4c948452-2c87-4532-badf-1afbb8a80f58] Running
E0813 20:58:18.473997   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.009609699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.72s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813204703-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (140.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813205823-30853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0
E0813 20:58:28.714348   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:49.195245   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.658194   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.663601   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.674616   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.695511   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.735829   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.816190   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:52.976388   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:53.296930   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:53.937989   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 20:58:55.219092   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813205823-30853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0: (2m20.924925242s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.93s)

TestNetworkPlugins/group/flannel/ControllerPod (7.27s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
E0813 20:58:57.780318   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
helpers_test.go:343: "kube-flannel-ds-amd64-z2f4n" [f33baba7-de77-4d36-8291-419972809508] Running

=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 7.26651135s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (7.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210813204703-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (12.6s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813204703-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-9n4bv" [83773efa-46d9-4407-918a-460e851d85d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 20:59:02.901165   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-9n4bv" [83773efa-46d9-4407-918a-460e851d85d3] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.017827814s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.60s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20210813204703-30853 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (11.64s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context flannel-20210813204703-30853 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-2xbfh" [0e883a52-008c-4ea8-9c47-2681f98cc1e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-2xbfh" [0e883a52-008c-4ea8-9c47-2681f98cc1e0] Running
E0813 20:59:10.759480   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 20:59:13.141885   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.010307951s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.64s)

TestNetworkPlugins/group/bridge/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813204703-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:162: (dbg) Run:  kubectl --context flannel-20210813204703-30853 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:181: (dbg) Run:  kubectl --context flannel-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin

=== CONT  TestNetworkPlugins/group/flannel/HairPin
net_test.go:231: (dbg) Run:  kubectl --context flannel-20210813204703-30853 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.28s)
E0813 21:09:23.861147   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:09:29.929811   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (179.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813205915-30853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813205915-30853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (2m59.94202829s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (179.94s)

TestStartStop/group/embed-certs/serial/FirstStart (108.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813205917-30853 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 20:59:30.155666   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 20:59:33.622891   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 21:00:14.584066   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813205917-30853 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (1m48.47463181s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.47s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [86dcdb86-fc79-11eb-b972-525400ed6e80] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [86dcdb86-fc79-11eb-b972-525400ed6e80] Running
E0813 21:00:52.076820   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 21:00:53.780080   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:53.785390   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:53.795639   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:53.815914   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:53.856195   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:53.937187   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:54.097882   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:54.418457   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:00:55.059291   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.049393551s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.69s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813205823-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0813 21:00:56.340031   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/old-k8s-version/serial/Stop (3.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210813205823-30853 --alsologtostderr -v=3
E0813 21:00:58.900946   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210813205823-30853 --alsologtostderr -v=3: (3.108642506s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853: exit status 7 (83.21248ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813205823-30853 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (472.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813205823-30853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813205823-30853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0: (7m51.74361446s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205823-30853 -n old-k8s-version-20210813205823-30853
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (472.01s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (132.85s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210102-30853 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 21:01:04.021892   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210102-30853 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (2m12.85469952s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (132.85s)

TestStartStop/group/embed-certs/serial/DeployApp (12.71s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813205917-30853 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [45ff4e70-4d5e-4c3e-af12-9d3fedc85ecc] Pending
helpers_test.go:343: "busybox" [45ff4e70-4d5e-4c3e-af12-9d3fedc85ecc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0813 21:01:07.715028   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
helpers_test.go:343: "busybox" [45ff4e70-4d5e-4c3e-af12-9d3fedc85ecc] Running
E0813 21:01:14.262998   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.03472211s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813205917-30853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.71s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813205917-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813205917-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030675453s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813205917-30853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)
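
EnableAddonWhileActive points the metrics-server addon at a substitute image via --images and --registries, then describes the deployment to confirm the override took effect. Condensed, with a grep filter added here purely for illustration:

  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813205917-30853 \
    --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  # The deployment's image reference should now point at the fake.domain registry.
  kubectl --context embed-certs-20210813205917-30853 describe deploy/metrics-server \
    -n kube-system | grep -i image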

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (4.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210813205917-30853 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210813205917-30853 --alsologtostderr -v=3: (4.117861311s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (4.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853: exit status 7 (88.136198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813205917-30853 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
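
EnableAddonAfterStop leans on minikube's status exit codes: with the host Stopped, status exits with code 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile. The same sequence in shell, commands copied from the log:

  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 \
    -n embed-certs-20210813205917-30853
  if [ $? -eq 7 ]; then
    # Exit code 7 was observed here for a Stopped host; addons can
    # still be toggled while the cluster is down.
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813205917-30853 \
      --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
  fi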

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (428.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813205917-30853 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 21:01:24.934664   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:24.939958   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:24.950218   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:24.970492   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:25.010809   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:25.091150   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:25.251796   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:25.572732   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:26.213777   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:27.494275   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:30.055059   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:34.743474   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:35.176076   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:36.504510   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 21:01:45.416336   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:51.433401   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.336276   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.346728   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.367014   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.408023   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.488771   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.649006   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:52.970034   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:53.558621   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 21:01:53.610820   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:54.891469   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:01:57.452514   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:02:02.573436   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:02:05.896868   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:02:12.814088   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813205917-30853 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (7m8.641360708s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 -n embed-certs-20210813205917-30853
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (428.95s)
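
SecondStart reruns the original start command against the stopped profile; minikube detects the existing KVM machine and restarts it rather than provisioning a new one, which is why the flags match the first start exactly. The whole stop-and-restart cycle for this group, condensed from the log:

  out/minikube-linux-amd64 stop -p embed-certs-20210813205917-30853 --alsologtostderr -v=3
  out/minikube-linux-amd64 start -p embed-certs-20210813205917-30853 --memory=2200 \
    --alsologtostderr --wait=true --embed-certs --driver=kvm2 \
    --container-runtime=crio --kubernetes-version=v1.21.3
  # Confirm the host came back.
  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813205917-30853 \
    -n embed-certs-20210813205917-30853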

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 create -f testdata/busybox.yaml
E0813 21:02:15.704057   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b6fdd94d-b6c8-4315-9193-8a937f605eaf] Pending
helpers_test.go:343: "busybox" [b6fdd94d-b6c8-4315-9193-8a937f605eaf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [b6fdd94d-b6c8-4315-9193-8a937f605eaf] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.036219962s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813205915-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813205915-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.238930996s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (63.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210813205915-30853 --alsologtostderr -v=3
E0813 21:02:33.295249   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:02:46.857464   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:03:08.233177   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.455763   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.461887   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.472174   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.492456   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.532779   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.613121   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:09.773850   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:10.094986   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:10.736082   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:12.016304   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:14.256439   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:03:14.576771   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210813205915-30853 --alsologtostderr -v=3: (1m3.464773422s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (63.46s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [61cc2db7-fac6-4cba-a9d5-268bc2d39a93] Pending
helpers_test.go:343: "busybox" [61cc2db7-fac6-4cba-a9d5-268bc2d39a93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0813 21:03:19.697537   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
helpers_test.go:343: "busybox" [61cc2db7-fac6-4cba-a9d5-268bc2d39a93] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.043125615s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.62s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813210102-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813210102-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054982308s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813210102-30853 --alsologtostderr -v=3
E0813 21:03:29.937933   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813210102-30853 --alsologtostderr -v=3: (3.106074366s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (3.11s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853: exit status 7 (70.217215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210813210102-30853 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (415.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210102-30853 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210102-30853 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (6m54.969546674s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210102-30853 -n default-k8s-different-port-20210813210102-30853
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (415.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853: exit status 7 (70.810512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210813205915-30853 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (447.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813205915-30853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0813 21:03:35.917663   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:37.624578   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:03:50.418317   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:52.658226   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:55.706412   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:55.711722   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:55.722040   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:55.742317   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:55.782662   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:55.863093   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:56.023522   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:56.344523   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:56.985453   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:03:58.265873   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.296884   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.472536   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.477841   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.488130   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.508456   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.548846   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.629236   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:01.789680   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:02.110414   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:02.751012   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:04.031820   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:06.418028   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:07.364844   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:08.778435   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:04:12.486048   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:16.659083   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:20.345654   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:22.726987   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:31.378975   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:36.177224   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:04:37.139359   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:04:43.208060   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:05:18.099867   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:05:24.169196   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:05:53.299215   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
E0813 21:05:53.780064   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:06:07.715117   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
E0813 21:06:21.465357   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813204704-30853/client.crt: no such file or directory
E0813 21:06:24.934374   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:06:40.020154   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory
E0813 21:06:46.089601   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/bridge-20210813204703-30853/client.crt: no such file or directory
E0813 21:06:51.433707   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:06:52.618791   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/calico-20210813204704-30853/client.crt: no such file or directory
E0813 21:06:53.558539   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 21:07:20.017380   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:08:08.232675   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204703-30853/client.crt: no such file or directory
E0813 21:08:09.456393   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813205915-30853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (7m26.690861929s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813205915-30853 -n no-preload-20210813205915-30853
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (447.09s)
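
This group runs with --preload=false, so the start skips minikube's preloaded image tarball and CRI-O pulls what it needs instead; that fits the restart taking 7m26s here against 7m08s for embed-certs and 6m54s for default-k8s-different-port. One way to inspect what ended up pulled, via the same ssh pattern VerifyKubernetesImages uses below:

  out/minikube-linux-amd64 ssh -p no-preload-20210813205915-30853 "sudo crictl images"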

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-77xxt" [6e067ab8-6535-4984-8dcf-037619871a7e] Running
E0813 21:08:37.140084   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/enable-default-cni-20210813204703-30853/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016173439s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-77xxt" [6e067ab8-6535-4984-8dcf-037619871a7e] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008837473s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813205917-30853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210813205917-30853 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
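
VerifyKubernetesImages pulls the image list as JSON over ssh and flags repositories outside the expected minikube/Kubernetes set (kindest/kindnetd and library/busybox here, both legitimately created by the test flow). A rough equivalent of that listing, assuming crictl's usual JSON shape with an images array carrying repoTags:

  out/minikube-linux-amd64 ssh -p embed-certs-20210813205917-30853 "sudo crictl images -o json" \
    | jq -r '.images[].repoTags[]'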

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-264rf" [7ba3c406-fc7a-11eb-b132-525400ed6e80] Running
E0813 21:08:52.658413   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kindnet-20210813204703-30853/client.crt: no such file or directory
E0813 21:08:55.706635   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/flannel-20210813204703-30853/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018816999s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-264rf" [7ba3c406-fc7a-11eb-b132-525400ed6e80] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009163386s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813205823-30853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210813205823-30853 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (87.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813210910-30853 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813210910-30853 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m27.540230495s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (87.54s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bjd2q" [b006335d-65ed-49c1-96b6-8d753f5fbef8] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026290336s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bjd2q" [b006335d-65ed-49c1-96b6-8d753f5fbef8] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012905627s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210102-30853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813210102-30853 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813210910-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813210910-30853 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030258716s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)
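
The warning reflects that this profile was started with --network-plugin=cni but no CNI plugin has been applied yet, so ordinary pods cannot schedule until one is installed; that is presumably also why DeployApp above completes in 0.00s, with the workload deployment skipped for this group. The symptom is easy to see with plain kubectl:

  # Without a CNI the node typically reports NotReady and
  # non-host-network pods stay Pending.
  kubectl --context newest-cni-20210813210910-30853 get nodes
  kubectl --context newest-cni-20210813210910-30853 get pods -A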

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (63.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210813210910-30853 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210813210910-30853 --alsologtostderr -v=3: (1m3.409475915s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (63.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-vl8vp" [772a7e9a-469e-46a7-9d84-da2b0f029cb7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-vl8vp" [772a7e9a-469e-46a7-9d84-da2b0f029cb7] Running
E0813 21:11:05.379282   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205823-30853/client.crt: no such file or directory
E0813 21:11:07.714099   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201821-30853/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.017571419s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)
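
The two helpers_test.go:343 lines above trace the dashboard pod from Pending to Running while the harness polls for pods matching the k8s-app=kubernetes-dashboard label. A minimal sketch of that kind of label-selector wait with client-go follows; the namespace, selector, and timeout mirror the log, but waitForPods is an illustrative stand-in, not minikube's actual helper.

// Sketch: poll until every pod matching a label selector reports Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists are retried until the timeout
		}
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase) // loosely mirrors the log lines above
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

// Usage: waitForPods(client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)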

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-vl8vp" [772a7e9a-469e-46a7-9d84-da2b0f029cb7] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011541798s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210813205915-30853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210813205915-30853 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
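
VerifyKubernetesImages works by running `sudo crictl images -o json` on the node and flagging any image outside the expected Kubernetes set, which is how library/busybox:1.28.4-glibc gets reported above. A hedged sketch of that check follows; the struct matches crictl's JSON list format, but the registry-prefix heuristic is a placeholder, not minikube's real expected-image list.

// Sketch: decode `crictl images -o json` and report unexpected images.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictl's list format: {"images":[{"repoTags":["k8s.gcr.io/pause:3.5"], ...}, ...]}
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Placeholder heuristic: treat anything outside these registries as non-minikube.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") && !strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}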

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853: exit status 7 (65.795119ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210813210910-30853 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)
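
The exit status 7 above is why the harness logs "may be ok": minikube status encodes machine state in its exit code, and after a stop a non-zero status is the expected outcome rather than a failure. The reading of 7 as a bitmask (host, kubelet, and apiserver each contributing a bit, so a fully stopped cluster exits 1|2|4 = 7) is an assumption to verify against minikube's status command source. A sketch of tolerating it:

// Sketch: treat `minikube status` exit code 7 as "stopped, may be ok".
// Assumption: 7 = 1|2|4, host + kubelet + apiserver all stopped; confirm
// against minikube's status implementation before relying on this.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "newest-cni-20210813210910-30853")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Printf("status %q: exit 7, stopped (may be ok)\n", out)
		return
	}
	if err != nil {
		panic(err) // any other failure is a real error
	}
	fmt.Printf("status %q\n", out)
}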

TestStartStop/group/newest-cni/serial/SecondStart (73.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813210910-30853 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0813 21:11:51.434604   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204704-30853/client.crt: no such file or directory
E0813 21:11:53.558153   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200811-30853/client.crt: no such file or directory
E0813 21:12:06.819865   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205823-30853/client.crt: no such file or directory
E0813 21:12:15.786583   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:15.792430   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:15.802640   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:15.822883   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:15.863148   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:15.943454   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:16.104308   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:16.425264   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:17.065937   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:18.346154   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:20.906768   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:26.027907   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
E0813 21:12:36.268180   30853 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-27424-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813205915-30853/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813210910-30853 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m13.000746048s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813210910-30853 -n newest-cni-20210813210910-30853
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (73.29s)
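
The repeated cert_rotation.go:168 errors interleaved above do not come from this test: client-go's certificate-rotation watcher in the shared test process keeps trying to reload client certificates for profiles that earlier tests already deleted (old-k8s-version, no-preload, and others), so every reload attempt fails with "no such file or directory". A hedged sketch of spotting such stale kubeconfig entries with client-go's clientcmd package follows; the check is illustrative and not part of minikube.

// Sketch: list kubeconfig users whose client-certificate files are gone,
// the condition behind the cert_rotation errors above.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue // inline or token-based credentials have no cert file
		}
		if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
			fmt.Printf("stale entry %q: %s no longer exists\n", name, auth.ClientCertificate)
		}
	}
}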

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210813210910-30853 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

Test skip (28/269)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:212: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
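
Most entries in this skip section are gated the same way as DockerEnv above: a guard at the top of the test compares the runtime or driver under test against what the test needs and calls t.Skipf otherwise. A minimal sketch of the pattern; ContainerRuntime here is an assumed stand-in for however the suite exposes the --container-runtime flag, not necessarily minikube's exact helper.

package sketch

import "testing"

// ContainerRuntime is an illustrative stand-in for the suite's accessor
// for the --container-runtime flag (assumption, not minikube's API).
func ContainerRuntime() string { return "crio" }

func TestDockerEnvSketch(t *testing.T) {
	if rt := ContainerRuntime(); rt != "docker" {
		// Mirrors the skip message logged above.
		t.Skipf("only validate docker env with docker container runtime, currently testing %s", rt)
	}
	// docker-env assertions would follow here
}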

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:286: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as crio container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813204703-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210813204703-30853
--- SKIP: TestNetworkPlugins/group/kubenet (0.29s)

TestStartStop/group/disable-driver-mounts (0.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813210102-30853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210813210102-30853
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)
